<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Daniel Favour</title>
    <description>The latest articles on Forem by Daniel Favour (@danielfavour).</description>
    <link>https://forem.com/danielfavour</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F842948%2F98f60f4d-4eaa-4ae9-b02f-c87268a743f2.jpeg</url>
      <title>Forem: Daniel Favour</title>
      <link>https://forem.com/danielfavour</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/danielfavour"/>
    <language>en</language>
    <item>
      <title>Automating User Management on Linux using Bash Script</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Tue, 02 Jul 2024 21:10:30 +0000</pubDate>
      <link>https://forem.com/danielfavour/automating-user-management-on-linux-using-bash-script-3o9l</link>
      <guid>https://forem.com/danielfavour/automating-user-management-on-linux-using-bash-script-3o9l</guid>
      <description>&lt;p&gt;Efficient user management is important for maintaining security and productivity. Manual management of users and groups can be time-consuming, especially in larger organizations where administrators need to handle multiple accounts and permissions. Automating these tasks not only saves time but also reduces the risk of human error.&lt;/p&gt;

&lt;p&gt;This guide discusses a practical approach to automating user management on a Linux machine using a Bash script. You will learn how to create users, assign groups, generate secure passwords for users, and log actions using a single bash script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of the Bash Script Functionality
&lt;/h2&gt;

&lt;p&gt;Below is an overview of the tasks the Bash script will automate for efficient user management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read User Data:&lt;/strong&gt; The script will read a text file containing employee usernames and their corresponding group names.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Users and Groups:&lt;/strong&gt; It will create users and groups as specified in the text file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set Up Home Directories:&lt;/strong&gt; The script will set up home directories for each user with appropriate permissions and ownership.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate Secure Passwords:&lt;/strong&gt; It will generate random, secure passwords for the users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log Actions:&lt;/strong&gt; All actions performed by the script will be logged to the &lt;code&gt;/var/log/user_management.log&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store Passwords Securely:&lt;/strong&gt; Generated passwords will be stored securely in the &lt;code&gt;/var/secure/user_passwords.txt&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling:&lt;/strong&gt; The script will include error handling to manage scenarios such as existing users and groups.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To get started with this tutorial, you must have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Linux machine with administrative privileges.&lt;/li&gt;
&lt;li&gt;Basic knowledge of Linux commands.&lt;/li&gt;
&lt;li&gt;A text editor of your choice (vim, nano, etc.).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up the User Data File
&lt;/h2&gt;

&lt;p&gt;The first step is to create a text file containing the username for each employee and the groups to be assigned to each of them.&lt;/p&gt;

&lt;p&gt;In your terminal, create a &lt;code&gt;user_passwords.txt&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;user_passwords&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste the following content into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;john;qa
jane;dev,manager
robert;marketing
emily;design,research
michael;devops
olivia;design,research
william;support
sophia;content,marketing
daniel;devops,sre
ava;dev,qa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each line above contains an employee's username and their respective group(s), separated by a semicolon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing the Bash script
&lt;/h2&gt;

&lt;p&gt;To start creating the Bash script, follow these steps in your terminal:&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Script file
&lt;/h3&gt;

&lt;p&gt;Open your terminal and run the following command to create an empty file named &lt;code&gt;create_users.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;create_users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use your preferred text editor to open the &lt;code&gt;create_users.sh&lt;/code&gt; file and begin writing the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;nano&lt;/span&gt; &lt;span class="nx"&gt;create_users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(If you are using a different editor, replace &lt;code&gt;nano&lt;/code&gt; with its command.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the Shebang Line
&lt;/h3&gt;

&lt;p&gt;At the top of the &lt;code&gt;create_users.sh&lt;/code&gt; file, include the &lt;a href="https://linuxhandbook.com/shebang/" rel="noopener noreferrer"&gt;shebang&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This line specifies the interpreter that will be used to execute the script. In this case, &lt;code&gt;#!/bin/bash&lt;/code&gt; indicates that the script should be run using the Bash shell.&lt;/p&gt;

&lt;h3&gt;
  
  
  Check Root Privileges
&lt;/h3&gt;

&lt;p&gt;Creating users and groups typically requires administrative privileges because it involves modifying system files and configurations. After the shebang line, add the below configuration to ensure that the Bash script is executed with root privileges:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;[[&lt;/span&gt; &lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ne&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;]];&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
    &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;This script must be run as root.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;exit&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="nx"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This checks whether the script is running with root privileges. If the current user ID (&lt;code&gt;$(id -u)&lt;/code&gt;) is not equal to (&lt;code&gt;-ne&lt;/code&gt;) &lt;code&gt;0&lt;/code&gt;, the condition is true (indicating the script is not running as root), and the code within the &lt;code&gt;then ... fi&lt;/code&gt; block will execute. In this case, it will print &lt;code&gt;"This script must be run as root."&lt;/code&gt; to the terminal, and the script will exit with a status of &lt;code&gt;1&lt;/code&gt;. This exit status signals to the operating system and any other processes that the script encountered an error and did not complete successfully.&lt;/p&gt;
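&lt;p&gt;To see how this guard behaves without actually being root, you can mirror the same test with a stand-in UID. This is a sketch for experimentation only; the &lt;code&gt;uid=1000&lt;/code&gt; value below is an assumed non-root ID, where the real script uses &lt;code&gt;$(id -u)&lt;/code&gt;:&lt;/p&gt;

```shell
#!/bin/bash
# Stand-in UID so the demo behaves the same for any user;
# the real script uses $(id -u) here instead.
uid=1000

if [ "$uid" -ne 0 ]; then
    echo "This script must be run as root."
fi
```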

&lt;h3&gt;
  
  
  Check that the Input File is Passed as an Argument
&lt;/h3&gt;

&lt;p&gt;In Bash scripting, "arguments" are the values or parameters provided to a script or command when it is invoked from the command line. For example, if you have a Bash script named &lt;code&gt;process_file.sh&lt;/code&gt; that reads an input file, &lt;code&gt;data.txt&lt;/code&gt;, passed as an argument, you would execute it in the terminal with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./process_file.sh data.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside your Bash script (&lt;code&gt;process_file.sh&lt;/code&gt;), you can access this argument using special variables like &lt;code&gt;$1&lt;/code&gt;, &lt;code&gt;$2&lt;/code&gt;, and so on. &lt;code&gt;$1&lt;/code&gt; refers to the first argument passed (&lt;code&gt;data.txt&lt;/code&gt; in this case). Once the script captures the argument, it can use it in various ways: it might open and read the specified file, process its contents, or perform any other operation the script is designed to do.&lt;/p&gt;
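&lt;p&gt;You can experiment with these variables directly; &lt;code&gt;set --&lt;/code&gt; replaces the current shell's positional parameters, simulating how arguments arrive (the filenames below are illustrative only):&lt;/p&gt;

```shell
#!/bin/bash
# Simulate two command-line arguments for the current shell.
set -- data.txt backup.txt

echo "argument count: $#"   # how many arguments were passed
echo "first argument: $1"   # the value a script would treat as its input file
```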

&lt;p&gt;In this case, the &lt;code&gt;create_users.sh&lt;/code&gt; script needs to read the user data file, &lt;code&gt;user_passwords.txt&lt;/code&gt;, which contains the employees' usernames and groups, so it can perform the required actions.&lt;/p&gt;

&lt;p&gt;Paste the following into your script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;[[&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ne&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;]];&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
    &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Usage: $0 &amp;lt;input-file&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;exit&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="nx"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;if [[ ... ]]; then ... fi&lt;/code&gt;:&lt;/strong&gt;  This is a conditional statement in Bash. The code inside the &lt;code&gt;then ... fi&lt;/code&gt; block will execute only if the condition within the double square brackets &lt;code&gt;[[ ... ]]&lt;/code&gt; evaluates to true.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;if [[ $# -ne 1 ]]; then&lt;/code&gt;:&lt;/strong&gt; This checks if the number of arguments (&lt;code&gt;$#&lt;/code&gt;) passed to the script is not equal (&lt;code&gt;-ne&lt;/code&gt;) to 1.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;echo "Usage: $0 &amp;lt;input-file&amp;gt;"&lt;/code&gt;:&lt;/strong&gt; If the condition is true (meaning the wrong number of arguments were provided), this line prints a helpful message to the terminal explaining how the script should be used.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Usage&lt;/code&gt;:&lt;/strong&gt; A standard keyword indicating the start of usage instructions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;$0&lt;/code&gt;:&lt;/strong&gt; This is a special variable that holds the name of the script itself. It is automatically replaced with the actual name of the script when it runs (for example, "create_users.sh").&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;&amp;lt;input-file&amp;gt;&lt;/code&gt;:&lt;/strong&gt; This placeholder communicates to the user that they need to provide the name of the input file (for example,  "user_passwords.txt") as the argument when running the script.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;exit 1&lt;/code&gt;:&lt;/strong&gt;  This terminates the script with an exit status of 1. An exit status of 1 signals that the script encountered an error and did not complete successfully.&lt;/li&gt;

&lt;/ul&gt;
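&lt;p&gt;The same guard can be exercised without creating a separate script by wrapping it in a function, since a function receives its own &lt;code&gt;$#&lt;/code&gt; and &lt;code&gt;$1&lt;/code&gt; the same way a script does. The function name below is hypothetical, and a plain word stands in for the &lt;code&gt;$0&lt;/code&gt; placeholder:&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical wrapper around the same argument-count check.
require_one_arg() {
    if [ "$#" -ne 1 ]; then
        echo "Usage: script input-file"
        return 1
    fi
    echo "Processing $1"
}

require_one_arg                     # wrong count: prints the usage line
require_one_arg user_passwords.txt  # exactly one argument: proceeds
```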

&lt;h3&gt;
  
  
  Assign Variables
&lt;/h3&gt;

&lt;p&gt;The next step is to assign variables for essential paths and files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;INPUT_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;$1&lt;/span&gt;
&lt;span class="nx"&gt;LOG_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/var/log/user_management.log&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;PASSWORD_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/var/secure/user_passwords.txt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;INPUT_FILE=$1&lt;/code&gt;:&lt;/strong&gt; This line assigns the first command-line argument (the &lt;code&gt;user_passwords.txt&lt;/code&gt; file) to the variable &lt;code&gt;INPUT_FILE&lt;/code&gt;. This makes it easier to reference the filename throughout the script.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;LOG_FILE="/var/log/user_management.log"&lt;/code&gt;:&lt;/strong&gt; This sets the variable LOG_FILE to the path where the script will write its log messages. This log will help track the actions performed by the script.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;PASSWORD_FILE="/var/secure/user_passwords.txt"&lt;/code&gt;:&lt;/strong&gt; This sets the variable &lt;code&gt;PASSWORD_FILE&lt;/code&gt; to the path where the generated passwords for the users will be stored securely.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Log Messages
&lt;/h3&gt;

&lt;p&gt;Logging messages is a common practice in scripting and software development as it records what happens at each step in the script.&lt;/p&gt;

&lt;p&gt;Add the configuration below to log messages to the &lt;code&gt;$LOG_FILE&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nb"&gt;Function&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;log&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt;
&lt;span class="nf"&gt;log_message&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$(date '+%Y-%m-%d %H:%M:%S') - $1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;tee&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;$LOG_FILE&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_message()&lt;/code&gt;:&lt;/strong&gt; This function is a reusable piece of code designed to create formatted log entries and write them to a specified log file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;echo "$(date '+%Y-%m-%d %H:%M:%S') - $1"&lt;/code&gt;:&lt;/strong&gt; &lt;code&gt;date '+%Y-%m-%d %H:%M:%S'&lt;/code&gt; generates the current date and time in the format "YYYY-MM-DD HH:MM:SS", and &lt;code&gt;$1&lt;/code&gt; represents the message passed to the function as its first argument. The command combines the timestamp and the message, separated by a hyphen (-), creating the formatted log entry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;tee -a $LOG_FILE&lt;/code&gt;:&lt;/strong&gt; The &lt;code&gt;tee&lt;/code&gt; command reads from standard input (in this case, the output of the echo command) and writes it to both standard output (the terminal) and to one or more files. The &lt;code&gt;-a&lt;/code&gt; option tells &lt;code&gt;tee&lt;/code&gt; to append to the file ($LOG_FILE) instead of overwriting it. This ensures that previous log entries are preserved.&lt;/li&gt;
&lt;/ul&gt;
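&lt;p&gt;You can try the function on its own with a log path you are allowed to write to. The &lt;code&gt;/tmp&lt;/code&gt; path below is a stand-in for the real &lt;code&gt;$LOG_FILE&lt;/code&gt;, which lives under &lt;code&gt;/var/log&lt;/code&gt; and requires root:&lt;/p&gt;

```shell
#!/bin/bash
# Stand-in log path so the demo does not need root.
LOG_FILE="/tmp/user_management_demo.log"

log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}

log_message "Created group: devops"   # printed to the terminal AND appended to the file
tail -n 1 "$LOG_FILE"                 # confirm the entry landed in the log
```

&lt;p&gt;Quoting &lt;code&gt;"$LOG_FILE"&lt;/code&gt; here guards against paths containing spaces; the unquoted form in the script is fine for its fixed path.&lt;/p&gt;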

&lt;h3&gt;
  
  
  Ensure the &lt;code&gt;/var/secure&lt;/code&gt; Directory Exists
&lt;/h3&gt;

&lt;p&gt;Before proceeding with user creation, it's imperative to establish a secure environment for storing the generated passwords. The passwords will be stored in the &lt;code&gt;/var/secure&lt;/code&gt; directory, so it is necessary to check if this directory exists and configure it with appropriate permissions to limit access to authorized users only.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;[[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/var/secure&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;]];&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
    &lt;span class="nx"&gt;mkdir&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/secur&lt;/span&gt;&lt;span class="err"&gt;e
&lt;/span&gt;    &lt;span class="nx"&gt;chown&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/secur&lt;/span&gt;&lt;span class="err"&gt;e
&lt;/span&gt;    &lt;span class="nx"&gt;chmod&lt;/span&gt; &lt;span class="mi"&gt;600&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/secur&lt;/span&gt;&lt;span class="err"&gt;e
&lt;/span&gt;&lt;span class="nx"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;if [[ ! -d "/var/secure" ]]; then&lt;/code&gt;:&lt;/strong&gt; This condition checks 
if the /var/secure directory exists. The &lt;code&gt;-d&lt;/code&gt; flag checks if the path is a directory, and the &lt;code&gt;!&lt;/code&gt; negates the result, meaning the code within the &lt;code&gt;then&lt;/code&gt; block will only execute if the directory does not exist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;mkdir -p /var/secure&lt;/code&gt;:&lt;/strong&gt; This line creates the &lt;code&gt;/var/secure&lt;/code&gt; directory if it does not exist. The &lt;code&gt;-p&lt;/code&gt; option ensures that any necessary parent directories are also created.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;chown root:root /var/secure&lt;/code&gt;:&lt;/strong&gt; This changes the owner of the &lt;code&gt;/var/secure&lt;/code&gt; directory to the root user and the root group. This is a security best practice, as sensitive data like passwords should only be accessible to the system administrator.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;chmod 600 /var/secure&lt;/code&gt;:&lt;/strong&gt; This sets the permissions of the &lt;code&gt;/var/secure&lt;/code&gt; directory so that only the owner (root) has read and write access; no other users on the system, including members of the root group, can access the directory's contents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;fi&lt;/code&gt;:&lt;/strong&gt; It is used to close an &lt;code&gt;if&lt;/code&gt; statement and indicates the end of the block of code that should be executed conditionally based on the evaluation of the if statement.&lt;/li&gt;
&lt;/ul&gt;
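&lt;p&gt;You can rehearse the same existence check against a throwaway path. The directory below is a stand-in for &lt;code&gt;/var/secure&lt;/code&gt;, and the &lt;code&gt;chown&lt;/code&gt; step is omitted because it requires root:&lt;/p&gt;

```shell
#!/bin/bash
# Stand-in for /var/secure so the demo runs without root.
SECURE_DIR="/tmp/secure_demo"

if [ ! -d "$SECURE_DIR" ]; then
    mkdir -p "$SECURE_DIR"
    chmod 600 "$SECURE_DIR"   # owner read/write only, mirroring the article
fi

stat -c '%a' "$SECURE_DIR"    # print the directory's octal permission bits
```

&lt;p&gt;Note that &lt;code&gt;stat -c&lt;/code&gt; is the GNU coreutils syntax found on Linux.&lt;/p&gt;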

&lt;h3&gt;
  
  
  Generate the User Passwords
&lt;/h3&gt;

&lt;p&gt;User passwords should be unique and secure. &lt;code&gt;/dev/urandom&lt;/code&gt;, a special file on Unix-like systems that provides a constant stream of high-quality random data, ensures the generated passwords are difficult to predict or replicate.&lt;/p&gt;

&lt;p&gt;Paste the following into the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nf"&gt;generate_password&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;tr&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;dc&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;A-Za-z0-9!@#$%^&amp;amp;*()_+=-[]{}|;:&amp;lt;&amp;gt;,.?/~&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/dev/u&lt;/span&gt;&lt;span class="nx"&gt;random&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;head&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;generate_password() {&lt;/code&gt;:&lt;/strong&gt; This defines the start of a function named &lt;code&gt;generate_password&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;tr -dc&lt;/code&gt;&lt;/strong&gt;: This command is used to delete all characters from the input that are not in the specified set. &lt;code&gt;tr&lt;/code&gt; stands for "translate" and is used to delete or replace characters, the &lt;code&gt;-d&lt;/code&gt; option specifies that characters should be deleted, and the &lt;code&gt;-c&lt;/code&gt; option complements the set of characters. This means that instead of deleting the characters specified in the set, it will delete all characters that are &lt;em&gt;not&lt;/em&gt; in the set.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;'A-Za-z0-9!@#$%^&amp;amp;*()_+=-[]{}|;:&amp;lt;&amp;gt;,.?/~'&lt;/code&gt;:&lt;/strong&gt; This is the set of characters allowed in the password, including uppercase letters (A-Z), lowercase letters (a-z), digits (0-9), and various special characters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;&amp;lt;/dev/urandom |&lt;/code&gt;:&lt;/strong&gt; &lt;code&gt;/dev/urandom&lt;/code&gt; is a special file in Unix-like operating systems that provides random data. &lt;code&gt;&amp;lt;&lt;/code&gt; is used to redirect the contents of &lt;code&gt;/dev/urandom&lt;/code&gt; as input to the command on the left, and the file contents are passed through the pipe &lt;code&gt;|&lt;/code&gt; to the next command.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;head -c 16&lt;/code&gt;&lt;/strong&gt;: The &lt;code&gt;head&lt;/code&gt; command outputs the first part of its input; the &lt;code&gt;-c 16&lt;/code&gt; option tells it to output only the first 16 bytes, yielding a 16-character password.&lt;/li&gt;
&lt;/ul&gt;
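&lt;p&gt;A simplified, runnable variant of the generator is sketched below. It restricts the set to alphanumerics, which sidesteps a &lt;code&gt;tr&lt;/code&gt; pitfall in larger sets (a bare &lt;code&gt;-&lt;/code&gt; or &lt;code&gt;]&lt;/code&gt; inside the set can be misread as part of a range), and pipes a fixed chunk of &lt;code&gt;/dev/urandom&lt;/code&gt; instead of redirecting it:&lt;/p&gt;

```shell
#!/bin/bash
# Simplified sketch: alphanumeric-only charset; the article's version
# allows special characters as well.
generate_password() {
    head -c 1024 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9' | head -c 16
}

password=$(generate_password)
echo "generated a ${#password}-character password"
```

&lt;p&gt;&lt;code&gt;LC_ALL=C&lt;/code&gt; keeps &lt;code&gt;tr&lt;/code&gt; byte-oriented, so it never misinterprets the random bytes as invalid multibyte characters.&lt;/p&gt;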

&lt;h3&gt;
  
  
  Process the Input File
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;user_passwords.txt&lt;/code&gt; file that was passed as an argument can now be used to carry out the user management tasks. The script will read the usernames and groups from the input file, create each user and their personal group, add them to their respective groups, set up their home directory, and generate and store their passwords. Keeping these tasks inside a while loop ensures each line of user data is processed sequentially.&lt;/p&gt;

&lt;p&gt;To create the while loop, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Read&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="nx"&gt;line&lt;/span&gt; &lt;span class="nx"&gt;by&lt;/span&gt; &lt;span class="nx"&gt;line&lt;/span&gt;
&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="nx"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;read&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="nx"&gt;groups&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;while ... do&lt;/code&gt;:&lt;/strong&gt; This starts a loop that continues to read lines from the input until there are no more lines to read.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;IFS=';'&lt;/code&gt;:&lt;/strong&gt; This sets the internal field separator (IFS) to a semicolon. It tells the read command to use semicolons as the delimiter for splitting input lines into fields.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;read -r&lt;/code&gt;:&lt;/strong&gt; Reads the input line into the variables username and groups.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;tr -d '[:space:]'&lt;/code&gt;:&lt;/strong&gt; Removes all whitespace characters from the &lt;code&gt;username&lt;/code&gt; and &lt;code&gt;groups&lt;/code&gt; values (applied in the next step).&lt;/li&gt;
&lt;/ul&gt;
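&lt;p&gt;You can watch the field splitting in isolation by piping a single sample line into the same &lt;code&gt;read&lt;/code&gt; call; the pipe stands in for the input-file redirection the full script uses:&lt;/p&gt;

```shell
#!/bin/bash
# Split one sample line on the semicolon, exactly as the loop does.
printf 'jane;dev,manager\n' | while IFS=';' read -r username groups; do
    echo "username=$username"
    echo "groups=$groups"
done
```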

&lt;p&gt;In this loop, there will be several iterations that will be carried out:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iteration 1: Trim any leading/trailing whitespace&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Trim&lt;/span&gt; &lt;span class="nx"&gt;any&lt;/span&gt; &lt;span class="nx"&gt;leading&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;trailing&lt;/span&gt; &lt;span class="nx"&gt;whitespace&lt;/span&gt;
    &lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;tr&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[:space:]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;groups&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$groups&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;tr&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[:space:]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;username=$(echo "$username" | tr -d '[:space:]')&lt;/code&gt;:&lt;/strong&gt; This line removes all whitespace characters from the &lt;code&gt;username&lt;/code&gt; value; since &lt;code&gt;tr -d&lt;/code&gt; deletes every matching character, this strips leading, trailing, and any interior whitespace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;groups=$(echo "$groups" | tr -d '[:space:]')&lt;/code&gt;:&lt;/strong&gt; This line does the same for the &lt;code&gt;groups&lt;/code&gt; variable, removing all whitespace from the list of groups associated with the user.&lt;/li&gt;
&lt;/ul&gt;
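&lt;p&gt;A quick check of what &lt;code&gt;tr -d '[:space:]'&lt;/code&gt; actually does. Note that it deletes every whitespace character, interior ones included, which is safe here because valid usernames contain no spaces:&lt;/p&gt;

```shell
#!/bin/bash
raw='  jane  '
clean=$(echo "$raw" | tr -d '[:space:]')
echo "before: [$raw]"
echo "after: [$clean]"
```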

&lt;p&gt;Log the username and associated groups read from the input file to verify that the script is reading the &lt;code&gt;user_passwords.txt&lt;/code&gt; file correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Debug&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Log&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;groups&lt;/span&gt; &lt;span class="nx"&gt;read&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;
    &lt;span class="nx"&gt;log_message&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Read line: username='$username', groups='$groups'&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is useful for troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iteration 2: Check if usernames or groups are empty&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;[[&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$groups&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;]];&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
        &lt;span class="nx"&gt;log_message&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Error: Username or groups missing in line: $username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;continue&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;[[ -z "$username" || -z "$groups" ]]&lt;/code&gt;:&lt;/strong&gt; &lt;code&gt;[[ ... ]]&lt;/code&gt; is a conditional expression in Bash for testing. &lt;code&gt;-z "$username"&lt;/code&gt; checks if the variable username is empty (has zero length). &lt;code&gt;-z "$groups"&lt;/code&gt; checks if the variable groups is empty (has zero length). &lt;code&gt;||&lt;/code&gt; is the logical OR operator, which means the condition is true if either $username or $groups (or both) are empty.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_message "Error: Username or groups missing in line: $username"&lt;/code&gt;:&lt;/strong&gt; If either &lt;code&gt;$username&lt;/code&gt; or &lt;code&gt;$groups&lt;/code&gt; is empty, this command writes an error message to the &lt;code&gt;$LOG_FILE&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;continue&lt;/code&gt;:&lt;/strong&gt; If the condition is true (i.e., either $username or $groups is empty), &lt;code&gt;continue&lt;/code&gt; skips the rest of the current iteration of the loop. The script then moves on to the next iteration to process the next line from the input file.&lt;/li&gt;
&lt;/ul&gt;
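&lt;p&gt;Feeding a few sample lines through the same check shows &lt;code&gt;continue&lt;/code&gt; skipping the malformed ones while valid lines pass (sample data only):&lt;/p&gt;

```shell
#!/bin/bash
# Three sample lines: one valid, one missing the username, one missing groups.
printf '%s\n' 'jane;dev' ';qa' 'robert;' | while IFS=';' read -r username groups; do
    if [ -z "$username" ] || [ -z "$groups" ]; then
        echo "skipped malformed line"
        continue
    fi
    echo "ok: $username in $groups"
done
```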

&lt;p&gt;&lt;strong&gt;Iteration 3: Check if the user already exists, otherwise create the user's personal group&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="nx"&gt;already&lt;/span&gt; &lt;span class="nx"&gt;exists&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;gt;&lt;/span&gt;&lt;span class="sr"&gt;/dev/&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
        &lt;span class="nx"&gt;log_message&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;User $username already exists, skipping.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
        &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Create&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;s personal group
        if ! getent group "$username" &amp;gt;/dev/null; then
            groupadd "$username"
            log_message "Created group: $username"
        fi
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;if id "$username" &amp;amp;&amp;gt;/dev/null; then:&lt;/code&gt;&lt;/strong&gt; This &lt;code&gt;if&lt;/code&gt; statement checks if the user &lt;code&gt;$username&lt;/code&gt; exists by querying the user database. &lt;code&gt;&amp;amp;&amp;gt;/dev/null&lt;/code&gt; redirects both stdout (standard output) and stderr (standard error) to &lt;code&gt;/dev/null&lt;/code&gt;, discarding any output. If the user exists, the condition is true (id command succeeds), and the script proceeds inside the if block.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_message "User $username already exists, skipping."&lt;/code&gt;:&lt;/strong&gt; If the user already exists (id command succeeds), this message is logged and the user creation process is skipped for this user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;else&lt;/code&gt;:&lt;/strong&gt; If the user does not exist (the id command fails), the script proceeds with user and group creation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;if ! getent group "$username" &amp;gt;/dev/null; then&lt;/code&gt;:&lt;/strong&gt; This command checks if a group with the username already exists and proceeds only if it doesn't.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;groupadd "$username"&lt;/code&gt;:&lt;/strong&gt; Creates a new group with the same name as the username. This group will serve as the user's primary group, providing some basic permissions and ownership settings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_message "Created group: $username"&lt;/code&gt;:&lt;/strong&gt; Calls the &lt;code&gt;log_message&lt;/code&gt; function to record the creation of the group; the message passed to it includes the name of the group that was created.&lt;/li&gt;
&lt;/ul&gt;
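&lt;p&gt;Because &lt;code&gt;groupadd&lt;/code&gt; requires root, the existence checks themselves can be tried safely with accounts that are guaranteed to be present. A minimal sketch:&lt;/p&gt;

```shell
# Exercising the existence checks with an account that exists on every Linux
# system (root) and one that does not; groupadd itself is not run here
# because it needs root privileges.
user_exists() {
    id "$1" >/dev/null 2>&1 && echo "exists" || echo "missing"
}
group_exists() {
    getent group "$1" >/dev/null && echo "exists" || echo "missing"
}

user_exists root                 # root always exists
group_exists root
user_exists no_such_user_12345   # a name that should not exist
```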

&lt;p&gt;&lt;strong&gt;Iteration 4: Create the users with their personal group&lt;/strong&gt;&lt;br&gt;
To create users with their personal groups, add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;        &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Create&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="kd"&gt;with&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;personal&lt;/span&gt; &lt;span class="nx"&gt;group&lt;/span&gt;
        &lt;span class="nx"&gt;useradd&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;g&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="nx"&gt;log_message&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Created user: $username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;useradd -m -g "$username" "$username"&lt;/code&gt;:&lt;/strong&gt; This creates a new user account with the specified username, creates their home directory, and assigns the user to their personal group.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_message "Created user: $username"&lt;/code&gt;:&lt;/strong&gt; Calls the logging function and logs the username of the newly created account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Iteration 5: Assign passwords to users&lt;/strong&gt;&lt;br&gt;
For the users to be able to log in, each should be assigned their own password.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;        &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Generate&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;random&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="kd"&gt;set&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;
        &lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;generate_password&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$username:$password&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;chpasswd&lt;/span&gt;
        &lt;span class="nx"&gt;log_message&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Set password for user: $username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;password=$(generate_password)&lt;/code&gt;:&lt;/strong&gt; Calls the &lt;code&gt;generate_password&lt;/code&gt; function to generate a password, and stores the output within a variable named &lt;code&gt;password&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;echo "$username:$password" | chpasswd&lt;/code&gt;:&lt;/strong&gt; This takes the generated password, formats it correctly, and passes it to the chpasswd command to set the password for the user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_message "Set password for user: $username"&lt;/code&gt;:&lt;/strong&gt; The function logs the action taken, indicating that the password has been set for the user.&lt;/li&gt;
&lt;/ul&gt;
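&lt;p&gt;The article defines &lt;code&gt;generate_password&lt;/code&gt; earlier in the script; one common way to implement such a function (not necessarily the exact recipe used there) is to filter &lt;code&gt;/dev/urandom&lt;/code&gt; down to a fixed-length alphanumeric string:&lt;/p&gt;

```shell
# A plausible generate_password implementation: keep only alphanumeric bytes
# from /dev/urandom and take the first 16 characters. The exact recipe in the
# article's script may differ.
generate_password() {
    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16
}

password=$(generate_password)
echo "generated a ${#password}-character password"
```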

&lt;p&gt;&lt;strong&gt;Iteration 6: Add the user to additional groups&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;        &lt;span class="nx"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;read&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ra&lt;/span&gt; &lt;span class="nx"&gt;group_array&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt; &lt;span class="s"&gt;"$groups"&lt;/span&gt;
        &lt;span class="na"&gt;for&lt;/span&gt; &lt;span class="na"&gt;group&lt;/span&gt; &lt;span class="na"&gt;in&lt;/span&gt; &lt;span class="s"&gt;"${group_array[@]}"&lt;/span&gt;&lt;span class="err"&gt;;&lt;/span&gt; &lt;span class="na"&gt;do&lt;/span&gt;
            &lt;span class="na"&gt;if&lt;/span&gt; &lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="na"&gt;getent&lt;/span&gt; &lt;span class="na"&gt;group&lt;/span&gt; &lt;span class="s"&gt;"$group"&lt;/span&gt; &lt;span class="err"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;/dev/null; then
                groupadd "$group"
                log_message "Created group: $group"
            fi
            usermod -aG "$group" "$username"
            log_message "Added user $username to group: $group"
        done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;IFS=',' read -ra group_array &amp;lt;&amp;lt;&amp;lt; "$groups"&lt;/code&gt;:&lt;/strong&gt; &lt;code&gt;IFS=','&lt;/code&gt; sets the Internal Field Separator (IFS) to comma (,). This means that when the read command reads &lt;code&gt;$groups&lt;/code&gt;, it will split it into multiple parts using comma as the delimiter. &lt;code&gt;read -ra group_array &amp;lt;&amp;lt;&amp;lt; "$groups"&lt;/code&gt; reads the content of $groups into an array group_array, splitting it based on the comma delimiter (','), &lt;code&gt;-r&lt;/code&gt; prevents backslashes from being interpreted as escape characters and &lt;code&gt;-a group_array&lt;/code&gt; assigns the result to the array variable group_array.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;for group in "${group_array[@]}"; do&lt;/code&gt;:&lt;/strong&gt; Iterates over each element (group) in the group_array array. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;if ! getent group "$group" &amp;amp;&amp;gt;/dev/null; then&lt;/code&gt;:&lt;/strong&gt; &lt;code&gt;getent group "$group"&lt;/code&gt; checks if the group $group exists in the system, &lt;code&gt;!&lt;/code&gt; negates the result, meaning if the group does not exist (! getent ...), the condition becomes true. &lt;code&gt;&amp;amp;&amp;gt;/dev/null&lt;/code&gt; redirects both stdout and stderr to /dev/null, discarding any output. If the group does not exist, it proceeds with group creation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;groupadd "$group"&lt;/code&gt;:&lt;/strong&gt; Creates the group $group if it does not already exist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_message "Created group: $group"&lt;/code&gt;:&lt;/strong&gt; Logs a message indicating that the group $group was successfully created.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;usermod -aG "$group" "$username"&lt;/code&gt;:&lt;/strong&gt; Adds the user $username to the group $group. The &lt;code&gt;-a&lt;/code&gt; flag appends the user to the group without removing them from any other groups, and &lt;code&gt;-G&lt;/code&gt; specifies the list of supplementary groups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_message "Added user $username to group: $group"&lt;/code&gt;:&lt;/strong&gt; Logs a message indicating that the user $username was successfully added to the group $group.&lt;/li&gt;
&lt;/ul&gt;
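&lt;p&gt;The splitting step can be observed in isolation with a sample comma-separated value:&lt;/p&gt;

```shell
# The comma-splitting step on its own: IFS=',' applies only to this read,
# and -ra stores the pieces in the group_array array.
groups="sudo,dev,www-data"
IFS=',' read -ra group_array <<< "$groups"

echo "found ${#group_array[@]} groups"
for group in "${group_array[@]}"; do
    echo "group: $group"
done
```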

&lt;p&gt;&lt;strong&gt;Iteration 7: Create and set the home directory permissions&lt;/strong&gt;&lt;br&gt;
Users should have access to their individual home directories to perform certain actions. Add the below configuration in the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;mkdir&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/home/$username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;chown&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;R&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$username:$username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/home/$username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;chmod&lt;/span&gt; &lt;span class="mi"&gt;755&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/home/$username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;mkdir -p "/home/$username"&lt;/code&gt;:&lt;/strong&gt; This creates the home directory for each new user, where $username is the username of the user being processed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;chown -R "$username:$username" "/home/$username"&lt;/code&gt;:&lt;/strong&gt; This changes the ownership of the user's home directory and all files and subdirectories within it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;chmod 755 "/home/$username"&lt;/code&gt;:&lt;/strong&gt; This sets the permissions for the newly created user's home directory. &lt;code&gt;755&lt;/code&gt; means the owner ($username) has read, write, and execute permissions (rwx), while users in the same group as the owner and all other users have read and execute permissions (r-x).&lt;/li&gt;
&lt;/ul&gt;
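&lt;p&gt;The permission step can be rehearsed without root by targeting a temporary directory instead of &lt;code&gt;/home&lt;/code&gt; (note that &lt;code&gt;stat -c&lt;/code&gt; is GNU coreutils syntax):&lt;/p&gt;

```shell
# Rehearsing the chmod 755 step on a temporary directory so no root access
# is needed; stat -c '%a' prints the octal mode (GNU coreutils).
dir=$(mktemp -d)
chmod 755 "$dir"
mode=$(stat -c '%a' "$dir")
echo "mode of $dir: $mode"
rmdir "$dir"
```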

&lt;p&gt;&lt;strong&gt;Iteration 8: Store the username and password securely&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;        &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Store&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt; &lt;span class="nx"&gt;securely&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$username,$password&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;$PASSWORD_FILE&lt;/span&gt;
        &lt;span class="nx"&gt;chmod&lt;/span&gt; &lt;span class="mi"&gt;600&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$PASSWORD_FILE&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="nx"&gt;log_message&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Password for $username stored in $PASSWORD_FILE.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;echo "$username,$password" &amp;gt;&amp;gt; $PASSWORD_FILE&lt;/code&gt;:&lt;/strong&gt; This line appends the username and its corresponding generated password, comma-separated, to the file designated for storing passwords securely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;chmod 600 "$PASSWORD_FILE"&lt;/code&gt;:&lt;/strong&gt; This command restricts access to the password file to ensure it remains secure and confidential.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_message "Password for $username stored in $PASSWORD_FILE."&lt;/code&gt;:&lt;/strong&gt; Logs a message indicating that the password for $username has been stored in the password file ($PASSWORD_FILE).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fi&lt;/code&gt;: The &lt;code&gt;fi&lt;/code&gt; keyword closes the &lt;code&gt;if...else&lt;/code&gt; conditional block that began in iteration 3, ending the per-user creation logic.&lt;/li&gt;
&lt;/ul&gt;
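&lt;p&gt;The storage step can likewise be rehearsed against a temporary file instead of the real &lt;code&gt;$PASSWORD_FILE&lt;/code&gt;, with sample credentials standing in for generated ones:&lt;/p&gt;

```shell
# Rehearsing the secure-storage step on a temporary file rather than
# /var/secure/user_passwords.txt; the credentials are sample values.
PASSWORD_FILE=$(mktemp)
username="alice"
password="s3cretPass"   # sample value, not a generated password
echo "$username,$password" >> "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"
mode=$(stat -c '%a' "$PASSWORD_FILE")
saved=$(cat "$PASSWORD_FILE")
echo "stored '$saved' with mode $mode"
rm -f "$PASSWORD_FILE"
```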

&lt;h3&gt;
  
  
  End the Loop
&lt;/h3&gt;

&lt;p&gt;End the loop using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;done&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The expression &lt;code&gt;done &amp;lt; "$1"&lt;/code&gt; closes the while loop and redirects its input from the file passed as the script's first argument (&lt;code&gt;$1&lt;/code&gt;), the same value stored earlier in the &lt;code&gt;INPUT_FILE&lt;/code&gt; variable.&lt;/p&gt;
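&lt;p&gt;A stripped-down version of the whole loop shows how &lt;code&gt;done &amp;lt; "$1"&lt;/code&gt; feeds it. Here a temporary file stands in for the script argument, and the &lt;code&gt;user;groups&lt;/code&gt; line format is an assumption for illustration:&lt;/p&gt;

```shell
# Minimal version of the read loop, fed from a temporary file standing in
# for "$1". The "user;groups" line format is an assumption.
INPUT_FILE=$(mktemp)
printf 'alice;sudo,dev\nbob;www-data\n' > "$INPUT_FILE"

count=0
while IFS=';' read -r username groups; do
    echo "would process user '$username' with groups '$groups'"
    count=$((count + 1))
done < "$INPUT_FILE"

rm -f "$INPUT_FILE"
```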

&lt;p&gt;The complete script is available in &lt;a href="https://github.com/FavourDaniel/user-management-in-linux" rel="noopener noreferrer"&gt;this GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run the Script
&lt;/h2&gt;

&lt;p&gt;To execute the script directly, without invoking &lt;code&gt;bash&lt;/code&gt; explicitly in your terminal, make it executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;chmod&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="nx"&gt;create_users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the script is executable, run it from the directory where it resides:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;create_users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt; &lt;span class="nx"&gt;user_passwords&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verify the Script Executed Tasks Successfully
&lt;/h2&gt;

&lt;p&gt;Several checks should be carried out to ensure that the script executed all the user management tasks successfully.&lt;/p&gt;

&lt;p&gt;To verify user existence, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify that the user password file was successfully created in the /var/secure/ directory, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/secure/u&lt;/span&gt;&lt;span class="nx"&gt;ser_passwords&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify group existence, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;getent&lt;/span&gt; &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify that the log file was created and that all actions were logged correctly without errors or unexpected behavior, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/log/u&lt;/span&gt;&lt;span class="nx"&gt;ser_management&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify that each user has a personal group with the same name as their username, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;getent&lt;/span&gt; &lt;span class="nx"&gt;passwd&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nx"&gt;getent&lt;/span&gt; &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
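&lt;p&gt;To see the expected shape of the output without creating any users, the same commands can be run against the root account, which exists on every Linux system:&lt;/p&gt;

```shell
# Running the verification commands against root, an account guaranteed to
# exist, to show the expected output format (name:password:UID:GID:...).
passwd_entry=$(getent passwd root)
group_entry=$(getent group root)
echo "$passwd_entry"
echo "$group_entry"
```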



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article highlights the automation of user management tasks using Bash scripts. It explores how scripting enhances efficiency in creating users, managing groups, setting permissions, and securing passwords on Linux systems.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setting up a LAMP Stack application</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Sun, 31 Mar 2024 03:44:09 +0000</pubDate>
      <link>https://forem.com/danielfavour/setting-up-a-lamp-stack-application-5814</link>
      <guid>https://forem.com/danielfavour/setting-up-a-lamp-stack-application-5814</guid>
      <description>&lt;p&gt;The LAMP stack is a powerful combination of open-source technologies, enabling developers to build dynamic and interactive websites and web applications. &lt;/p&gt;

&lt;p&gt;This article is a comprehensive guide to containerizing a LAMP (Linux, Apache, MySQL, PHP) stack application. It covers everything from setting up the application to running the containers using Docker commands and Docker Compose, along with monitoring the containers. To improve readability, it has been split up into different parts to form a series. This first part focuses on setting up the application on your local machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F89241109%2F256314580-e1dedd92-4c23-4fcb-92f6-c88ed87715ab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F89241109%2F256314580-e1dedd92-4c23-4fcb-92f6-c88ed87715ab.png" alt="image edit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please check the below embed to view the image in higher resolution:&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://user-images.githubusercontent.com/89241109/256314580-e1dedd92-4c23-4fcb-92f6-c88ed87715ab.png" rel="noopener noreferrer"&gt;
      user-images.githubusercontent.com
    &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;In the above architecture diagram, users initiate an HTTP request by accessing the application through the browser using either "localhost" or the server's IP address. The server, with Apache installed, responds by serving the "form.html" file to users, prompting them to fill in their details, including their name, email, and description.&lt;/p&gt;

&lt;p&gt;Upon completing the form, users submit the data back to the server. Apache then forwards the submitted data to a PHP script responsible for storing this information in the MySQL database. If the data is successfully stored, MySQL communicates this success to the PHP script, which responds with an HTML message displayed in the user's browser. On the other hand, if there is an issue while saving the data, the PHP script returns an error message to the user's browser, notifying them of the encountered problem.&lt;/p&gt;

&lt;p&gt;This robust architecture ensures a seamless flow of data between users, Apache, PHP, and MySQL, providing a smooth user experience and reliable data management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we get started, ensure that you have the following in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; or &lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt; installed&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/compose/install/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt; (If you are using Docker Desktop, it comes with it already)&lt;/li&gt;
&lt;li&gt;An IDE, &lt;a href="https://code.visualstudio.com/Download" rel="noopener noreferrer"&gt;VSCode&lt;/a&gt; recommended&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Application Setup
&lt;/h2&gt;

&lt;p&gt;For setup and testing purposes, the application will be deployed and tested on Ubuntu Linux, using VirtualBox and Vagrant within a macOS environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;

&lt;p&gt;At the beginning of the build, the environment will resemble the structure below, but over time, as we create more folders and files, the structure will change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── form_submit.php
├── form.html
├── install.sh
├── setup.sh
└── vagrantfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setup the Linux Environment
&lt;/h3&gt;

&lt;p&gt;To set up the Linux environment on macOS, we will use Vagrant and VirtualBox. A script is provided below to install them automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; This step applies to macOS users only. Linux users can proceed to the installation section.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an &lt;code&gt;install.sh&lt;/code&gt; file in the root of the project and copy the below contents into it.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;

&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;VirtualBox&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;installed&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;VBoxManage&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;gt;&lt;/span&gt; &lt;span class="sr"&gt;/dev/&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
    &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;VirtualBox not found. Installing VirtualBox...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;brew&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;cask&lt;/span&gt; &lt;span class="nx"&gt;virtualbox&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;
    &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;VirtualBox is already installed.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;fi&lt;/span&gt;


&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;Vagrant&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;installed&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;vagrant&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;gt;&lt;/span&gt; &lt;span class="sr"&gt;/dev/&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
    &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Vagrant not found. Installing Vagrant...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;brew&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;cask&lt;/span&gt; &lt;span class="nx"&gt;vagrant&lt;/span&gt;
    &lt;span class="nx"&gt;brew&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;cask&lt;/span&gt; &lt;span class="nx"&gt;vagrant&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;manager&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;
    &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Vagrant is already installed.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Give the script executable permission and run it with the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;chmod&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;
&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script checks whether VirtualBox and Vagrant are already installed on your system and prints a message if they are. If either is not found, it installs the missing tool automatically via Homebrew.&lt;/p&gt;
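&lt;p&gt;The detection pattern the script relies on can be exercised on its own, with one command that always exists and one that does not:&lt;/p&gt;

```shell
# The same command -v detection pattern used in install.sh, tried with a
# command that exists on every Unix-like system and one that should not.
have() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1 is already installed."
    else
        echo "$1 not found."
    fi
}

have sh
have definitely_missing_tool_42
```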

&lt;p&gt;Also in the root of the project folder, create a &lt;code&gt;vagrantfile&lt;/code&gt; file and paste the below contents into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;NUM_CONTROLLER_NODE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="nx"&gt;IP_NTW&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;192.168.56.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;CONTROLLER_IP_START&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="nx"&gt;NODE_IP_START&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;

&lt;span class="nx"&gt;Vagrant&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ubuntu/bionic64&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

    &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="nx"&gt;NUM_CONTROLLER_NODE&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; 
        &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;define&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dockertask&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
            &lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;virtualbox&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;vb&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
                &lt;span class="nx"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dockertask&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
                &lt;span class="nx"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2048&lt;/span&gt;
                &lt;span class="nx"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
            &lt;span class="nx"&gt;end&lt;/span&gt;

            &lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dockertask&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
            &lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;private_network&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IP_NTW&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#{CONTROLLER_IP_START + i}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
            &lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;forwarded_port&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;guest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#{2710 + i}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="nx"&gt;end&lt;/span&gt;
    &lt;span class="nx"&gt;end&lt;/span&gt;
&lt;span class="nx"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Vagrantfile configuration automatically sets up the Linux VM using the &lt;code&gt;ubuntu/bionic64&lt;/code&gt; box. The VM is named &lt;code&gt;dockertask&lt;/code&gt;, which you are free to change. To provision the VM, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;vagrant&lt;/span&gt; &lt;span class="nx"&gt;up&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;provision&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the VirtualBox application to see the newly provisioned VM.&lt;/p&gt;

&lt;p&gt;NB: To seamlessly copy files from your local machine (macOS) to the Linux VM, you can use the &lt;code&gt;vagrant scp&lt;/code&gt; command (provided by the &lt;code&gt;vagrant-scp&lt;/code&gt; plugin):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;vagrant&lt;/span&gt; &lt;span class="nx"&gt;scp&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/absolute-path-of-the-project-directory-to-the project-folder/&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="na"&gt;of-vm&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:/home/vagrant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;It should look like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vagrant scp /Users/favour/Desktop/Modules/module-2/ dockertask:/home/vagrant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that this command should be run every time you make a change locally so that the Linux VM has the updated version.&lt;/p&gt;
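&lt;p&gt;If you would rather not remember to re-copy by hand, the check can be automated. Below is a minimal sketch that fingerprints the project folder and only re-runs the copy step when something changed. &lt;code&gt;COPY_CMD&lt;/code&gt; is a hypothetical placeholder, not part of the article's setup; substitute your real &lt;code&gt;vagrant scp&lt;/code&gt; invocation.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: re-run the copy step only when the project folder has changed.
# COPY_CMD is a hypothetical placeholder -- substitute your real
# `vagrant scp ...` command here.
SRC_DIR=${SRC_DIR:-.}
STAMP=.last_sync_checksum
COPY_CMD=${COPY_CMD:-"echo copy-step-would-run-here"}

# Checksum every file (excluding the stamp itself), then checksum the
# sorted result to get a single fingerprint for the whole folder.
current=$(find "$SRC_DIR" -name "$STAMP" -prune -o -type f -exec cksum {} + | sort | cksum)
previous=$(cat "$STAMP" 2>/dev/null || true)

if [ "$current" != "$previous" ]; then
    $COPY_CMD
    printf '%s' "$current" > "$STAMP"
fi
```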

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;With the VM now provisioned, the next step involves installing Apache, MySQL, and PHP, creating the essential LAMP stack foundation for the application. A script has also been provided to automate the installation process. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;setup.sh&lt;/code&gt; file at the root of the project folder, copy and paste the below contents into it:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;

&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;update&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;VM&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;update&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;

&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Install&lt;/span&gt; &lt;span class="nx"&gt;apache&lt;/span&gt; &lt;span class="nx"&gt;web&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;start&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;apache2&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;service&lt;/span&gt; &lt;span class="nx"&gt;apache2&lt;/span&gt; &lt;span class="nx"&gt;start&lt;/span&gt;

&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Install&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;go&lt;/span&gt; &lt;span class="nx"&gt;through&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Please&lt;/span&gt; &lt;span class="nx"&gt;remember&lt;/span&gt; &lt;span class="nx"&gt;your&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mysql_secure_installation&lt;/span&gt;

&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Install&lt;/span&gt; &lt;span class="nx"&gt;PHP&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;libapache2&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;mod&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apache2ctl&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;M&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;a2dismod&lt;/span&gt; &lt;span class="nx"&gt;mpm_event&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;a2enmod&lt;/span&gt; &lt;span class="nx"&gt;mpm_prefork&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;a2enmod&lt;/span&gt; &lt;span class="nx"&gt;php7&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;init&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;apache2&lt;/span&gt; &lt;span class="nx"&gt;restart&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What each command does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo apt-get update -y&lt;/code&gt;:&lt;/strong&gt; This updates the package lists and ensures that the package information is up-to-date before proceeding with any installations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo apt-get install apache2 -y&lt;/code&gt;:&lt;/strong&gt; This installs the Apache web server on the system. The -y flag allows the installation to proceed automatically without asking for user confirmation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo service apache2 start&lt;/code&gt;:&lt;/strong&gt; This command starts the Apache web server, so it becomes active and can serve web pages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo apt-get install mysql-server -y&lt;/code&gt;:&lt;/strong&gt; This installs the MySQL database server. The -y flag allows the installation to proceed automatically without asking for user confirmation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo /usr/bin/mysql_secure_installation&lt;/code&gt;:&lt;/strong&gt; This script guides you through a series of steps to set up MySQL securely. It prompts you to configure the root password, remove anonymous users, disable remote root login, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo apt-get install php -y&lt;/code&gt;:&lt;/strong&gt; This installs PHP, a server-side scripting language, on the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo apt-get install php-mysql -y&lt;/code&gt;:&lt;/strong&gt; This installs the PHP MySQL extension, which allows PHP to communicate with a MySQL database. This extension is required if your PHP application needs to interact with a MySQL database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo apt-get install libapache2-mod-php -y&lt;/code&gt;:&lt;/strong&gt; This installs the PHP module for Apache web server (libapache2-mod-php) along with its dependencies. The -y flag allows the installation to proceed automatically without asking for user confirmation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo apache2ctl -M&lt;/code&gt;:&lt;/strong&gt; This command lists all the loaded Apache modules. It is used to check if the PHP module (mod_php) is successfully loaded after installation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo a2dismod mpm_event&lt;/code&gt;:&lt;/strong&gt; This disables the Apache event module (mpm_event) to switch to the prefork module, which is required for running PHP with Apache.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo a2enmod mpm_prefork&lt;/code&gt;:&lt;/strong&gt; This enables the Apache prefork module (mpm_prefork), which is necessary to work with PHP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo a2enmod php7.2&lt;/code&gt;:&lt;/strong&gt; This enables the PHP module (mod_php) in Apache for PHP version 7.2. Replace 7.2 with the appropriate version if you are using a different PHP version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo /etc/init.d/apache2 restart&lt;/code&gt;:&lt;/strong&gt; This restarts the Apache web server to apply the changes made by enabling the PHP module and switching to the prefork module.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
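&lt;p&gt;After the MPM switch, it is worth confirming that the prefork MPM and the PHP module actually show up in the loaded-module list. The helper below is a hypothetical sketch, not part of the article's script; it reads an &lt;code&gt;apache2ctl -M&lt;/code&gt;-style listing on stdin.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: check that the prefork MPM and PHP module are loaded.
# check_modules is a hypothetical helper; it filters an `apache2ctl -M`-style
# module listing read from stdin.
check_modules() {
    grep -E 'mpm_prefork_module|php([0-9.]+)?_module' || echo "expected modules missing"
}

# On the VM you would pipe the real listing through it:
# sudo apache2ctl -M | check_modules
```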

&lt;p&gt;To execute the script, run the below commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;chmod&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="nx"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;
&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A prompt will come up asking whether you want to set up the MySQL VALIDATE PASSWORD plugin; it should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No: y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is best to enable password validation for security purposes, so input "y". You will then get the following prompt asking you to pick a password policy level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LOW    Length &amp;gt;= 8
MEDIUM Length &amp;gt;= 8, numeric, mixed case, and special characters
STRONG Length &amp;gt;= 8, numeric, mixed case, special characters and dictionary

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For securing the MySQL server, you can select the password complexity level you prefer. If you choose the "LOW" policy, option "0", a password like &lt;code&gt;dockertask&lt;/code&gt; meets the requirements. &lt;/p&gt;

&lt;p&gt;It's important to note that selecting a more complex password, especially for production environments, is strongly recommended to enhance security. However, for local development or testing purposes, a simple password like &lt;code&gt;dockertask&lt;/code&gt; can be used. &lt;/p&gt;

&lt;p&gt;With password validation enabled, the MySQL setup will evaluate the strength of the root password you provide and present you with a strength assessment. If you are satisfied with the password, proceed by entering "y" at the prompt. &lt;/p&gt;

&lt;p&gt;It should resemble the below output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Estimated strength of the password: 100 
Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
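&lt;p&gt;If you want something stronger than a throwaway password like &lt;code&gt;dockertask&lt;/code&gt;, one quick way to generate a long, mixed-case random password is sketched below. It assumes the &lt;code&gt;openssl&lt;/code&gt; command-line tool is available (it ships with Ubuntu 18.04); for the MEDIUM/STRONG policies you may still need to add a special character by hand.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: generate a random 16-character password for the MySQL root user.
# Assumes the openssl command-line tool is available.
PASSWORD=$(openssl rand -base64 12)
echo "$PASSWORD"
```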



&lt;p&gt;After completing the MySQL setup, the script will proceed with the remaining instructions. To ensure that PHP has been successfully installed, you can confirm it by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;php -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Executing this command will display the installed PHP version along with relevant details, affirming that PHP is now operational and ready for use in your environment.&lt;/p&gt;

&lt;p&gt;Now, open your web browser and enter the IP address of the server, which should be &lt;code&gt;192.168.56.2&lt;/code&gt; or &lt;code&gt;localhost&lt;/code&gt; depending on the environment you are working from. If everything is set up correctly, you should see the Apache web server running, and it will display the default Apache landing page or any other content that you might have configured. This confirms that Apache is up and running and successfully serving web pages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To verify that you can access the MySQL console with the updated root user password, execute the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NB: The &lt;code&gt;-p&lt;/code&gt; flag prompts you to enter the password you set during the MySQL setup. After providing the correct password, you should be able to log in to the MySQL console, where you can manage and interact with the MySQL database.&lt;/p&gt;

&lt;p&gt;To exit the MySQL console, simply type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will take you out of the MySQL console and return you to the regular command prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup the Application
&lt;/h3&gt;

&lt;p&gt;In the root of the project folder, create a &lt;code&gt;form.html&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Paste the below contents into it:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;head&amp;gt;
    &amp;lt;title&amp;gt;
    Test Page
    &amp;lt;/title&amp;gt;
    &amp;lt;/head&amp;gt;
    &amp;lt;body&amp;gt;
    &amp;lt;form action="http://localhost/form_submit.php" class="alt" method="POST"&amp;gt;
    &amp;lt;div class="row uniform"&amp;gt;
    &amp;lt;div class="name"&amp;gt;
    &amp;lt;input name="name" id="" placeholder="Name" type="text"&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;div class="email"&amp;gt;
    &amp;lt;input name="email" placeholder="Email" type="email"&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;div class="message"&amp;gt;
    &amp;lt;textarea name="message" placeholder="Message" rows="4"&amp;gt;&amp;lt;/textarea&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;br/&amp;gt;
    &amp;lt;input class="alt" value="Submit" name="submit" type="submit"&amp;gt;
    &amp;lt;/form&amp;gt;
    &amp;lt;/body&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the same directory, create a &lt;code&gt;form_submit.php&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch form_submit.php
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Paste the below contents into it:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php
$host = getenv('DB_HOST');
$db_name = getenv('MYSQL_DATABASE');
$username = getenv('DB_USER');
$password = getenv('MYSQL_PASSWORD');
$option = array(
    PDO::ATTR_ERRMODE =&amp;gt; PDO::ERRMODE_EXCEPTION
  );

# Catch errors
try{
    $connection = new PDO("mysql:host=" . $host . ";dbname=" . $db_name, $username, $password, $option);
    $connection-&amp;gt;exec("set names utf8");
} catch(PDOException $exception){
    echo "Connection error: " . $exception-&amp;gt;getMessage();
}


function saveData($name, $email, $message){
    global $connection;
    $query = "INSERT INTO test(name, email, message) VALUES( :name, :email, :message)";

    try {
        $callToDb = $connection-&amp;gt;prepare( $query );
        $name=htmlspecialchars(strip_tags($name));
        $email=htmlspecialchars(strip_tags($email));
        $message=htmlspecialchars(strip_tags($message));
        $callToDb-&amp;gt;bindParam(":name",$name);
        $callToDb-&amp;gt;bindParam(":email",$email);
        $callToDb-&amp;gt;bindParam(":message",$message);

        if($callToDb-&amp;gt;execute()){
            return '&amp;lt;h3 style="text-align:center;"&amp;gt;Your information has been submitted to the database successfully!&amp;lt;/h3&amp;gt;';
        }   else {
            return '&amp;lt;h3 style="text-align:center;"&amp;gt;Failed to save data.&amp;lt;/h3&amp;gt;';
        }
    } catch (PDOException $exception) {
        return '&amp;lt;h3 style="text-align:center;"&amp;gt;Error: ' . $exception-&amp;gt;getMessage() . '&amp;lt;/h3&amp;gt;';
    }
}


if( isset($_POST['submit'])){
    $name = htmlentities($_POST['name']);
    $email = htmlentities($_POST['email']);
    $message = htmlentities($_POST['message']);

    //then you can use them in a PHP function. 
    $result = saveData($name, $email, $message);
    echo $result;
} else{
    echo '&amp;lt;h3 style="text-align:center;"&amp;gt;A very detailed error message ( ͡° ͜ʖ ͡°)&amp;lt;/h3&amp;gt;';
}
?&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
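&lt;p&gt;Note that &lt;code&gt;form_submit.php&lt;/code&gt; inserts into a &lt;code&gt;test&lt;/code&gt; table inside the &lt;code&gt;dev_to&lt;/code&gt; database, so both must exist before the form will save anything. A minimal sketch of that schema is below; the column types and sizes are assumptions inferred from the INSERT statement, not something the article specifies.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: write out the schema the PHP script appears to expect.
# Column types/sizes are assumptions inferred from the INSERT statement.
cat > schema.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS dev_to;
USE dev_to;
CREATE TABLE IF NOT EXISTS test (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    message TEXT
);
EOF

# Load it into MySQL (prompts for the root password set earlier):
# sudo mysql -p < schema.sql
```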



&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;.env&lt;/code&gt; file which will contain environment variables for the PHP script to use in connecting to the database:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Paste the below content into the file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DB_HOST=mysql
MYSQL_DATABASE=dev_to
DB_USER=root
MYSQL_PASSWORD=dockertask
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
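&lt;p&gt;One caveat: if you are serving the app directly with Apache as in this section, PHP's &lt;code&gt;getenv()&lt;/code&gt; does not read a &lt;code&gt;.env&lt;/code&gt; file on its own; the variables have to reach Apache some other way (a dotenv library, or server-level &lt;code&gt;SetEnv&lt;/code&gt; directives). Below is a sketch of one bridge: translating each &lt;code&gt;KEY=VALUE&lt;/code&gt; line into a &lt;code&gt;SetEnv&lt;/code&gt; directive. &lt;code&gt;env_to_setenv&lt;/code&gt; is a hypothetical helper, not part of the article's setup.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: translate each KEY=VALUE line of a .env file into an Apache SetEnv
# directive. env_to_setenv is a hypothetical helper; values containing spaces
# would additionally need quoting.
env_to_setenv() {
    while IFS='=' read -r key value; do
        # skip blank lines and comments
        case $key in ''|\#*) continue ;; esac
        printf 'SetEnv %s %s\n' "$key" "$value"
    done < "$1"
}

# Example: append the output to the VirtualHost block in dockertask.conf
# env_to_setenv .env
```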



&lt;h3&gt;
  
  
  Creating a Virtual Host
&lt;/h3&gt;

&lt;p&gt;Creating a Virtual Host in Apache provides numerous benefits, such as hosting multiple websites on a single server while maintaining separate configurations for each site. It conserves IP addresses, streamlines website management, and enhances security by isolating websites from one another.&lt;/p&gt;

&lt;p&gt;While it is &lt;strong&gt;not mandatory&lt;/strong&gt; to create a virtual host, doing so is highly beneficial, especially when dealing with multiple websites that need to be served using Apache. We will be creating a virtual host for learning purposes as this process helps you gain familiarity with Apache's directory configurations, enabling you to manage websites more effectively in the future. &lt;/p&gt;

&lt;p&gt;In case you choose not to create a virtual host, you can proceed to work with the default directory located at &lt;code&gt;/var/www/html&lt;/code&gt;. This directory serves as the default location for hosting websites on the Apache web server. &lt;/p&gt;

&lt;p&gt;For better organization, we will create a directory called &lt;code&gt;dockertask&lt;/code&gt; to house the project files, making it easier to manage and serve them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To create the &lt;code&gt;dockertask&lt;/code&gt; directory, run the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;mkdir&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/www/&lt;/span&gt;&lt;span class="nx"&gt;dockertask&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To ensure proper ownership and permissions, assign the &lt;code&gt;dockertask&lt;/code&gt; directory to your current system user by running the below commands:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;chown&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;R&lt;/span&gt; &lt;span class="nx"&gt;$USER&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;$USER&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/www/&lt;/span&gt;&lt;span class="nx"&gt;projectlamp&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;chmod&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;R&lt;/span&gt; &lt;span class="mi"&gt;755&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/www/&lt;/span&gt;&lt;span class="nx"&gt;dockertask&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will allow you to work with the project files without encountering any permission issues.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create and open a new configuration file, &lt;code&gt;dockertask.conf&lt;/code&gt;, in Apache’s sites-available directory
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;nano&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;apache2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;sites&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;available&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;dockertask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a new blank file. Paste in the following bare-bones configuration, then press &lt;code&gt;ctrl+x&lt;/code&gt; and type "y" to save it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;VirtualHost 192.168.56.2:80&amp;gt;
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/dockertask
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
&amp;lt;/VirtualHost&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are working directly from a Linux OS, use the below configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;VirtualHost *:80&amp;gt;
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/dockertask
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
&amp;lt;/VirtualHost&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To set the ServerName directive globally, open Apache's main configuration file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;nano&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;apache2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;apache2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;On the first line of the &lt;code&gt;apache2.conf&lt;/code&gt; file, add the below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ServerName 192.168.56.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Leave the ServerName as &lt;code&gt;localhost&lt;/code&gt; if you are working directly with a Linux OS.&lt;/p&gt;

&lt;p&gt;Setting &lt;code&gt;ServerName&lt;/code&gt; globally in Apache means specifying a default server name that will be used for any virtual host that does not explicitly define its own ServerName.&lt;/p&gt;

&lt;p&gt;In Apache HTTP Server, the ServerName directive is used to define the hostname and port number of the virtual host. A virtual host allows you to run multiple websites on the same physical server, and each virtual host can have its own ServerName that corresponds to a unique domain or hostname.&lt;/p&gt;
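&lt;p&gt;Since each virtual host follows the same shape, generating the file can be scripted. Below is a sketch; &lt;code&gt;make_vhost&lt;/code&gt; is a hypothetical helper, and in this article's setup its output would be written to &lt;code&gt;/etc/apache2/sites-available/&amp;lt;site&amp;gt;.conf&lt;/code&gt;.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: generate a minimal Apache virtual-host file for a given site name.
# make_vhost is a hypothetical helper, not part of the article's steps.
make_vhost() {
    site=$1
    cat <<EOF
<VirtualHost *:80>
    ServerName $site
    DocumentRoot /var/www/$site
    ErrorLog \${APACHE_LOG_DIR}/$site-error.log
    CustomLog \${APACHE_LOG_DIR}/$site-access.log combined
</VirtualHost>
EOF
}

# Example: make_vhost dockertask > dockertask.conf
```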

&lt;ul&gt;
&lt;li&gt;Use the &lt;code&gt;ls&lt;/code&gt; command to confirm the &lt;code&gt;dockertask.conf&lt;/code&gt; file exists in Apache's sites-available directory:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;ls&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;apache2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;sites&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;available&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The output should resemble the below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;000-default.conf  default-ssl.conf  dockertask.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Enable the newly created virtual host:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;a2ensite&lt;/span&gt; &lt;span class="nx"&gt;dockertask&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; We need to disable the default website that comes installed with Apache. This is required if you’re not using a custom domain name, because in this case Apache’s default configuration would overwrite your virtual host. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To disable Apache’s default website, run the &lt;code&gt;a2dissite&lt;/code&gt; command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;a2dissite&lt;/span&gt; &lt;span class="mi"&gt;000&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To make sure the configuration file doesn’t contain syntax errors, run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apache2ctl&lt;/span&gt; &lt;span class="nx"&gt;configtest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The output should resemble the below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Syntax OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Finally, reload Apache so the changes take effect:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;systemctl&lt;/span&gt; &lt;span class="nx"&gt;reload&lt;/span&gt; &lt;span class="nx"&gt;apache2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new website is now active, but the web root &lt;code&gt;/var/www/dockertask&lt;/code&gt; is still empty.&lt;/p&gt;

&lt;p&gt;Now we move the required files, &lt;code&gt;form.html&lt;/code&gt; and &lt;code&gt;form_submit.php&lt;/code&gt;, to this directory from the home directory where they currently exist.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To move the files, run the below command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;mv&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/www/&lt;/span&gt;&lt;span class="nx"&gt;dockertask&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;mv&lt;/span&gt; &lt;span class="nx"&gt;form_submit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/www/&lt;/span&gt;&lt;span class="nx"&gt;dockertask&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, Apache prioritises an &lt;code&gt;index.html&lt;/code&gt; file over &lt;code&gt;index.php&lt;/code&gt;, making it the landing page for the application. This is useful for temporary maintenance pages: after maintenance, simply renaming or removing the &lt;code&gt;index.html&lt;/code&gt; in the document root restores the regular application page.&lt;/p&gt;

&lt;p&gt;To change this behavior, we need to edit the &lt;code&gt;/etc/apache2/mods-enabled/dir.conf&lt;/code&gt; file and change the order in which the index.php file is listed within the DirectoryIndex directive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;nano&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;apache2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mods&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;dir&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Change the existing configuration to look like the below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;IfModule mod_dir.c&amp;gt;
        #Change this:
        #DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
        #To this:
        DirectoryIndex form.html form_submit.php index.cgi index.pl index.xhtml index.htm
&amp;lt;/IfModule&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After saving and closing the file, reload Apache for the changes to take effect:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;systemctl&lt;/span&gt; &lt;span class="nx"&gt;reload&lt;/span&gt; &lt;span class="nx"&gt;apache2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set up the MySQL Database
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Log into MySQL to set up a database:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;u&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a new MySQL user:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the MySQL shell, run the following query to create a new user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;CREATE&lt;/span&gt; &lt;span class="nx"&gt;USER&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vagrant&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;192.168.56.2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;IDENTIFIED&lt;/span&gt; &lt;span class="nx"&gt;BY&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Strongpassword@123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; You can choose a different user and password for the above. Also remember to change the server address depending on the environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grant privileges to the new user:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, grant the necessary privileges to the new user and flush the privileges to ensure the changes take effect immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;GRANT&lt;/span&gt; &lt;span class="nx"&gt;ALL&lt;/span&gt; &lt;span class="nx"&gt;PRIVILEGES&lt;/span&gt; &lt;span class="nx"&gt;ON&lt;/span&gt; &lt;span class="nx"&gt;dev_to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;TO&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vagrant&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;192.168.56.2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;FLUSH&lt;/span&gt; &lt;span class="nx"&gt;PRIVILEGES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; Troubleshooting&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you run into an access denied error for the root user after providing your password, troubleshoot with &lt;a href="https://stackoverflow.com/questions/39281594/error-1698-28000-access-denied-for-user-rootlocalhost" rel="noopener noreferrer"&gt;this&lt;/a&gt;. It is a common issue on Ubuntu/Linux systems.&lt;/p&gt;
&lt;/blockquote&gt;
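For reference, the fix most commonly suggested in that thread is to switch the root account from socket authentication to password authentication (this assumes MySQL 5.7+; the password below is a placeholder, use your own):

```sql
-- Run inside the MySQL shell (started with: sudo mysql)
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'Strongpassword@123';
FLUSH PRIVILEGES;
```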

&lt;ul&gt;
&lt;li&gt;Create a database using the below SQL commands:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="nx"&gt;database&lt;/span&gt; &lt;span class="nx"&gt;dev_to&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;use&lt;/span&gt; &lt;span class="nx"&gt;dev_to&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt; &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="nx"&gt;int&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;NULL&lt;/span&gt; &lt;span class="nx"&gt;AUTO_INCREMENT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nf"&gt;varchar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;email&lt;/span&gt; &lt;span class="nf"&gt;varchar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;PRIMARY&lt;/span&gt; &lt;span class="nc"&gt;KEY &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Confirm the &lt;code&gt;test&lt;/code&gt; table was created:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;describe&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see an output resembling the below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; describe test;
+---------+--------------+------+-----+---------+----------------+
| Field   | Type         | Null | Key | Default | Extra          |
+---------+--------------+------+-----+---------+----------------+
| id      | int(11)      | NO   | PRI | NULL    | auto_increment |
| name    | varchar(255) | YES  |     | NULL    |                |
| email   | varchar(255) | YES  |     | NULL    |                |
| message | text         | YES  |     | NULL    |                |
+---------+--------------+------+-----+---------+----------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Exit the MySQL shell:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Fix Binding Settings for MySQL
&lt;/h3&gt;

&lt;p&gt;Skip this step if you are working from a Linux OS with your ServerName as &lt;code&gt;localhost&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;By default, MySQL is bound to the local host (127.0.0.1). However, since the Vagrant server has its own unique IP address setup, we must configure MySQL to bind to this specific IP address.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the MySQL configuration file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;nano&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;conf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mysqld&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cnf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Search for &lt;code&gt;bind-address&lt;/code&gt; and change it from &lt;code&gt;127.0.0.1&lt;/code&gt; to &lt;code&gt;192.168.56.2&lt;/code&gt;, then save and exit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Restart the mysql server&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;service&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="nx"&gt;restart&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
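To recap, after the edit the relevant line in `mysqld.cnf` reads as follows (using this guide's Vagrant IP; other lines in the section are omitted here):

```ini
[mysqld]
bind-address = 192.168.56.2
```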



&lt;p&gt;Now, any information filled into the HTML form will automatically be saved in this database.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To run checks, log into the MySQL server:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;u&lt;/span&gt; &lt;span class="nx"&gt;root&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check for existing databases and identify the one we created earlier, &lt;code&gt;dev_to&lt;/code&gt;. Once located, we can proceed to examine the tables residing under it, including the &lt;code&gt;test&lt;/code&gt; table.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;SHOW&lt;/span&gt; &lt;span class="nx"&gt;DATABASES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;USE&lt;/span&gt; &lt;span class="nx"&gt;dev_to&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;SHOW&lt;/span&gt; &lt;span class="nx"&gt;TABLES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;describe&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;FROM&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we explored the process of setting up a fully functional LAMP stack application and learned how the components communicate with each other. In the next article, we will explore how to containerize the application using Docker.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Credits to &lt;a href="https://dev.to/satellitebots/create-a-web-server-and-save-form-data-into-mysql-database-using-php-beginners-guide-fah"&gt;Adnan Alam&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Container Monitoring: Ensuring Application Performance and Health</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Sun, 06 Aug 2023 15:24:52 +0000</pubDate>
      <link>https://forem.com/danielfavour/container-monitoring-ensuring-application-performance-and-health-kcj</link>
      <guid>https://forem.com/danielfavour/container-monitoring-ensuring-application-performance-and-health-kcj</guid>
<description>&lt;p&gt;In the previous &lt;a href="https://dev.to/danielfavour/containerizing-a-lamp-stack-application-4pf1"&gt;article&lt;/a&gt;, we successfully containerized the LAMP stack application and conducted health checks on the containers. In this article, we will focus on effectively monitoring these containers.&lt;/p&gt;

&lt;p&gt;Container Monitoring is essential for maintaining the stability and efficiency of containerized applications. Monitoring containers allows you to gain insights into the resource usage, performance, and health of individual containers running within pods. It helps you identify potential issues with specific containers, track resource consumption, and troubleshoot application problems at a granular level.&lt;/p&gt;

&lt;p&gt;To effectively monitor the containers, we will utilize tools like cAdvisor, Prometheus, MySQLd Exporter, and Grafana. We will look into them individually as we progress.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment Setup
&lt;/h2&gt;

&lt;p&gt;For the setup, we could install these tools directly on the host OS. However, given that the article is container-focused, we will deploy them within containers. This approach simplifies both the setup and the cleanup after the tasks have been completed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Diagram
&lt;/h3&gt;

&lt;p&gt;This is an overview of what the setup will look like and how each component communicates with the other:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm4m1n9dcwh5pqh8b9p4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm4m1n9dcwh5pqh8b9p4.png" alt="Arch diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please check the below embed to view in higher resolution&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://user-images.githubusercontent.com/89241109/259898594-32b2c25f-e8a6-4cbd-a1c0-9c9d9bf63ded.png" rel="noopener noreferrer"&gt;
      user-images.githubusercontent.com
    &lt;/a&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;

&lt;p&gt;The folder structure for this article will look like the below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

.
├── .env
├── docker-compose.yml
├── install.sh
├── mysql
│   ├── db.sql
│   └── makefile
├── monitoring
│   ├── alertmanager
│   │   └── alert.yml
│   ├── docker-compose.yml
│   ├── grafana
│   │   └── grafana_db
│   ├── prometheus
│   │   ├── prometheus.yml
│   │   └── rules.yml
│   ├── .env
│   └── .my.cnf
├── php
│   ├── .env
│   ├── Dockerfile
│   ├── form.html
│   ├── form_submit.php
│   └── makefile
├── setup.sh
└── vagrantfile


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The only thing new here is the &lt;code&gt;monitoring&lt;/code&gt; directory; we already created the rest of the directories and files in the previous article.&lt;/p&gt;
&lt;h3&gt;
  
  
  cAdvisor
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/google/cadvisor" rel="noopener noreferrer"&gt;cAdvisor&lt;/a&gt; (Container Advisor) is an open-source monitoring and performance analysis tool specifically designed for containerized environments. It provides real-time insights into container resource usage and performance characteristics on a host system by collecting metrics like CPU, memory, network stats, and file system use at regular intervals for monitoring and analysis.&lt;/p&gt;

&lt;p&gt;Our objective here is to set up cAdvisor to monitor and extract metrics from the containers. Subsequently, Prometheus will be configured to scrape these metrics, enabling them to be collected and processed. Prometheus utilizes the metrics it scrapes from cAdvisor to enable comprehensive monitoring and management of the containers. These metrics are stored as time series data, forming the foundation for analysis, visualization, and querying.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For cAdvisor to scrape metrics from containers and provide monitoring information, it needs access to the Docker data directory and the Docker socket (which is essentially the Docker API). Since we will be running the containers with Docker Compose, we will mount the relevant host paths into the cAdvisor container in our Docker Compose file.&lt;/li&gt;
&lt;/ul&gt;
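As a sketch of what those mounts can look like in a Compose file (the service name and image tag here are illustrative, not the article's final file), cAdvisor is typically given read-only access to these host paths:

```yaml
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro                         # host filesystem, read-only
      - /var/run:/var/run:ro                 # includes the Docker socket
      - /sys:/sys:ro                         # kernel/cgroup stats
      - /var/lib/docker/:/var/lib/docker:ro  # Docker data directory
```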

&lt;p&gt;If you choose not to use cAdvisor to collect metrics, you can use the Docker Daemon or Engine. If you're using Docker Desktop, go to your Docker Desktop settings, select "Docker Engine," and add the following configuration:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"metrics-addr" : "127.0.0.1:9323",


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;and change "experimental" from false to true. The final configuration should look like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": true,
  "features": {
    "buildkit": true
  },
  "metrics-addr": "127.0.0.1:9323"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;or like this&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "metrics-addr" : "127.0.0.1:9323",
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": true,
  "features": {
    "buildkit": true
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Click on the &lt;code&gt;Apply &amp;amp; Restart&lt;/code&gt; button.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NB: If you have Docker Engine installed directly on your system, edit the &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; file and make the same change. &lt;/p&gt;

&lt;p&gt;With this, the Docker Engine exposes the metrics at &lt;code&gt;http://localhost:9323&lt;/code&gt; for Prometheus to scrape.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To see the metrics from the Docker Daemon/Engine, visit &lt;code&gt;http://localhost:9323/metrics&lt;/code&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmeteztws4oqnh6giads.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmeteztws4oqnh6giads.png" alt="Metrics Docker"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Prometheus
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://prometheus.io/docs/introduction/overview/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; is an open-source systems monitoring and alerting tool. In the context of cAdvisor, Prometheus scrapes (pulls) the metrics collected by cAdvisor from the Docker Daemon about containers and stores them in its database as time series data. This allows the metrics to be easily queried, analysed, and visualised for monitoring and understanding container behaviour.&lt;/p&gt;
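To make the scrape concrete, here is a small illustrative Python sketch (not part of the article's stack) that parses a few lines of Prometheus' text exposition format, the same format cAdvisor and the Docker Engine serve at their `/metrics` endpoints:

```python
import re

# Illustrative sketch only: a tiny parser for Prometheus' text
# exposition format, the format served at /metrics endpoints.
LINE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'   # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'            # optional {label="value",...}
    r'\s+(?P<value>\S+)'                     # sample value
)

def parse_metrics(text):
    """Return (name, labels, value) samples, skipping comment/HELP/TYPE lines."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        m = LINE.match(line)
        if not m:
            continue
        labels = {}
        if m.group('labels'):
            # naive split: assumes label values contain no commas or escapes
            for pair in m.group('labels').split(','):
                key, val = pair.split('=', 1)
                labels[key.strip()] = val.strip().strip('"')
        samples.append((m.group('name'), labels, float(m.group('value'))))
    return samples

sample = """\
# HELP engine_daemon_container_states_containers The count of containers
# TYPE engine_daemon_container_states_containers gauge
engine_daemon_container_states_containers{state="running"} 3
engine_daemon_container_states_containers{state="stopped"} 1
"""
print(parse_metrics(sample))
```

Prometheus does this parsing itself on every scrape; the sketch is only meant to show what the time series data it stores looks like before querying.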

&lt;ul&gt;
&lt;li&gt;In the &lt;code&gt;monitoring&lt;/code&gt; directory, create a &lt;code&gt;prometheus&lt;/code&gt; folder and a &lt;code&gt;prometheus.yml&lt;/code&gt; file inside it:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;mkdir&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;
&lt;span class="nx"&gt;cd&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;
&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Paste the below contents into the &lt;code&gt;prometheus.yml&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 5s
  external_labels:
    monitor: 'docker-container-monitor'

rule_files:
  - rules.yml

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['prometheus:9090']

  - job_name: 'cadvisor'
    scrape_interval: 5s
    static_configs:
      - targets: ['cadvisor:8080']

  - job_name: 'mysqld_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['mysqld_exporter:9104']

  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['node-exporter:9100']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's break down what each section does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;global:&lt;/code&gt;&lt;/strong&gt; This section contains global configuration settings for Prometheus. The &lt;code&gt;scrape_interval&lt;/code&gt; specifies the time interval at which Prometheus should scrape metrics (in this case, every 5 seconds). The &lt;code&gt;external_labels&lt;/code&gt; section is used to add additional labels to all collected metrics. In this example, the label "monitor" with the value "docker-container-monitor" is added to all metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;rule_files:&lt;/code&gt;&lt;/strong&gt; This specifies the paths to one or more rule files. In this case, the file named "rules.yml" contains alerting rules that Prometheus will evaluate against the collected metrics. We will be looking at this soon.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;alerting:&lt;/code&gt;&lt;/strong&gt; This section configures how Prometheus sends alerts. It specifies the target addresses of Alertmanager instances. We will also be looking at this soon.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;scrape_configs:&lt;/code&gt;&lt;/strong&gt; This is the heart of the configuration, defining various jobs for scraping metrics. Each &lt;code&gt;job_name&lt;/code&gt; corresponds to a specific type of metric source. Under each &lt;code&gt;job_name&lt;/code&gt;, the &lt;code&gt;static_configs&lt;/code&gt; section lists the targets (endpoints) from which Prometheus should scrape metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The first &lt;strong&gt;&lt;code&gt;job_name&lt;/code&gt;&lt;/strong&gt; is "prometheus," which targets the Prometheus instance itself. This self-monitoring capability ensures that Prometheus remains reliable and continues to provide accurate data for other monitoring and alerting tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second &lt;strong&gt;&lt;code&gt;job_name&lt;/code&gt;&lt;/strong&gt; is "cadvisor," targeting cAdvisor for container-level metrics with a scrape interval of 5 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The third &lt;strong&gt;&lt;code&gt;job_name&lt;/code&gt;&lt;/strong&gt; is "mysqld_exporter," targeting the MySQL Exporter for database-related metrics with a scrape interval of 5 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The fourth &lt;strong&gt;&lt;code&gt;job_name&lt;/code&gt;&lt;/strong&gt; is "node_exporter," targeting the Node Exporter for host machine operating system and hardware-related metrics with a scrape interval of 5 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you went the Docker Daemon path as shown previously rather than using cAdvisor, the below configuration will need to be added inside the &lt;code&gt;prometheus.yml&lt;/code&gt; file so that Prometheus has the endpoint where it needs to scrape the metrics:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  - job_name: docker
    scrape_interval: 5s
    metrics_path: /metrics
    static_configs:
      - targets: ['host.docker.internal:9323']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;job_name: docker:&lt;/code&gt;&lt;/strong&gt; This is the name you're giving to the job. It's a label that helps you identify this particular job in Prometheus.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;scrape_interval: 5s:&lt;/code&gt;&lt;/strong&gt; This specifies the interval at which Prometheus will scrape (collect) metrics from the specified target(s). In this case, it's set to 5 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;metrics_path: /metrics:&lt;/code&gt;&lt;/strong&gt; This is the path at which Prometheus will request metrics from the target(s). In many applications and services, a common convention is to expose metrics at the "/metrics" endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;static_configs:&lt;/code&gt;&lt;/strong&gt; This section defines a list of static targets that Prometheus will scrape. Each target is specified as a dictionary with its own configuration options.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;targets: ['host.docker.internal:9323']:&lt;/code&gt;&lt;/strong&gt; This is the target you're configuring Prometheus to scrape metrics from. In this case, the target is &lt;code&gt;host.docker.internal&lt;/code&gt; (which is a special DNS name used to refer to the host machine from within Docker containers) and the port 9323.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Environment Variables
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;code&gt;monitoring&lt;/code&gt; directory, create a &lt;code&gt;.env file&lt;/code&gt; which will contain environment variables for the Grafana container:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Paste the below contents into the &lt;code&gt;.env&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GF_SECURITY_ADMIN_USER=
GF_SECURITY_ADMIN_PASSWORD=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When you run Grafana for the first time, the default username and password are both &lt;code&gt;admin&lt;/code&gt;, but for security reasons we need to override that and set our own username and password. Be sure to provide a username and password in the above.&lt;/p&gt;

&lt;p&gt;Now we will create a configuration file that the MySQLd Exporter will use to access the MySQL server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the same &lt;code&gt;monitoring&lt;/code&gt; directory, create a &lt;code&gt;.my.cnf&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch .my.cnf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Paste the below into it:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[client]
user = root
password = Strongpassword@123
host = mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If you recall, we used these variables when creating the MySQL server container. If you changed your variables, ensure to replace the above configuration file with the appropriate ones.

## Alerting and Notification
We will configure alerts to notify us in case of unusual behaviour or performance decline within the containers. Alerts can be established either through Prometheus or within Grafana. For now, our emphasis will be on configuring alerts in Prometheus. 
Additionally, there are various methods to configure alerts, including email, Slack, Teams, etc. You can learn more about them [here](https://prometheus.io/docs/alerting/latest/configuration/).
For the scope of this article, we will configure alerting for Slack.

### Setup Slack Channel
To set up alerting for Slack, we will need a Slack channel in a workspace that we have admin access to. You can learn how to create a new Slack workspace [here](https://slack.com/help/articles/206845317-Create-a-Slack-workspace).

If you already have a workspace, you can proceed to creating a new channel in the workspace.

- Create a channel in the workspace and give it a name like `alertmanager` or any name of your choice. Click on the channel name at the top.

![New Channel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ki74x8ltdbi30g6w01wt.png)

- You will be presented with the below screen. Select `Integrations` and under Apps, click on the `Add an App` button.
![Add an App](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6e8bahzphprir1rfhrrf.png)

- Search for `Incoming Webhooks` and click the `install` button.

![Incoming Webhooks](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbmio0ettudkpkyc3nd8.png)

- You will be redirected to your browser, click the `Add to Slack` button.
![Add to Slack](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/692kzk2v1ucnu3w44c0c.png)

- Search for the channel you want the Incoming Webhook app to be added to and select it.

![Add Webhook to channel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r6shvi2xvshr1kzkdao4.png)

- You will be presented with the settings for it, scroll to the bottom and click the `Save Settings` button.

- Scroll down again and copy the Webhook URL already generated for you, you can also choose to customize the icon for it.

![Webhook URL](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9s9yiqhn4zfkfzcr104w.png)

- Return back to Slack and you should see a message showing that an integration has been added to the channel.

![Webhook Added to Channel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46s4o2x7n1tohnuwy2go.png)

Now, the Slack channel has been prepped to receive alerts.

### Alertmanager
In production environments, downtimes occur frequently or less, and ensuring that you know about it before users do is important, hence why we will be setting up alerting for our containers using Alertmanager. 

[Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) is an open-source component that works in conjunction with Prometheus for managing and sending alerts in a more sophisticated and organized manner. It is responsible for handling alerts generated by Prometheus and other sources, routing them to appropriate receivers, and deduplicating or grouping them as needed. It ensures that alerts are delivered reliably to the right individuals or systems for timely response.

We will be configuring Alertmanager to send alerts generated by Prometheus to the Slack channel we just created.

- Inside the monitoring directory, create a folder called `alertmanager` and create file called `alertmanager.yml` inside it:
```jsx


mkdir alertmanager
cd alertmanger
touch alertmanager.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Paste the below configuration into the &lt;code&gt;alertmanager.yml&lt;/code&gt; file:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;route&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;group_by&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;receiver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;alert&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;test&lt;/span&gt;
  &lt;span class="nx"&gt;group_interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
  &lt;span class="nx"&gt;repeat_interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
  &lt;span class="nx"&gt;routes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;slack&lt;/span&gt;
        &lt;span class="nx"&gt;receiver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;alert&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;test&lt;/span&gt;

&lt;span class="nx"&gt;receivers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;alert&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;test&lt;/span&gt;
  &lt;span class="nx"&gt;slack_configs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;api_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;insert&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;Webhook&lt;/span&gt; &lt;span class="nx"&gt;URL&lt;/span&gt; &lt;span class="nx"&gt;copied&lt;/span&gt; &lt;span class="nx"&gt;earlier&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nx"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#alertmanager&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="nx"&gt;icon_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//avatars3.githubusercontent.com/u/3380462&lt;/span&gt;
    &lt;span class="nx"&gt;send_resolved&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;!channel&amp;gt; &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;summary: {{ .CommonAnnotations.summary }}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;description: {{ .CommonAnnotations.description }}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Breakdown of the configuration:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Route Configuration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;group_by: [cluster]:&lt;/code&gt;&lt;/strong&gt; This indicates that alerts should be grouped based on the "cluster" label. Alerts with the same cluster label value will be grouped together.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;receiver: alert-test:&lt;/code&gt;&lt;/strong&gt; This specifies the default receiver for alerts. When an alert is triggered, it will be sent to the "alert-test" receiver.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;group_interval: 30s:&lt;/code&gt;&lt;/strong&gt; This sets the interval for grouping alerts with the same label. In this case, alerts with the same cluster label value will be grouped every 30 seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;repeat_interval: 30s:&lt;/code&gt;&lt;/strong&gt; This sets the interval at which alerts are repeated. If an alert is still active, it will be sent again every 30 seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;routes:&lt;/code&gt;&lt;/strong&gt; This section defines routes based on specific conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;match:&lt;/code&gt;&lt;/strong&gt; This indicates that the route applies only when specific conditions are met.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;severity: slack:&lt;/code&gt;&lt;/strong&gt; This specifies that the route applies to alerts with the "slack" severity label.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;receiver: alert-test:&lt;/code&gt;&lt;/strong&gt; This route uses the "alert-test" receiver to send alerts matching the specified conditions.&lt;/li&gt;
&lt;/ul&gt;
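To build intuition for how the route section selects a receiver, here is a toy sketch of label-based route matching. This is an illustration only, not Alertmanager's actual implementation; the function and receiver names are hypothetical:

```python
def pick_receiver(alert_labels, routes, default_receiver):
    """Return the receiver of the first route whose match labels
    all appear on the alert; otherwise fall back to the default."""
    for route in routes:
        if all(alert_labels.get(k) == v for k, v in route["match"].items()):
            return route["receiver"]
    return default_receiver

# Mirrors the configuration above: severity=slack routes to alert-test
routes = [{"match": {"severity": "slack"}, "receiver": "alert-test"}]

# An alert labelled severity=slack matches the sub-route
assert pick_receiver({"severity": "slack"}, routes, "default") == "alert-test"
# An alert with no matching labels falls through to the default receiver
assert pick_receiver({"severity": "critical"}, routes, "default") == "default"
```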

&lt;p&gt;&lt;strong&gt;Receiver Configuration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;name: alert-test:&lt;/code&gt;&lt;/strong&gt; This defines the name of the receiver, which is used to identify it in the route and alerts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;slack_configs:&lt;/code&gt;&lt;/strong&gt; This section configures the Slack notification settings for this receiver.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;api_url:&lt;/code&gt;&lt;/strong&gt; This is the Slack incoming webhook URL used to send notifications to Slack. Remember to paste the Webhook URL previously copied from the Webhook integration for the &lt;code&gt;alertmanager&lt;/code&gt; slack channel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;channel:&lt;/code&gt;&lt;/strong&gt; This specifies the Slack channel where notifications will be sent, which in this case is the &lt;code&gt;alertmanager&lt;/code&gt; channel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;icon_url:&lt;/code&gt;&lt;/strong&gt; This URL points to an icon image for the Slack notification.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;send_resolved: true:&lt;/code&gt;&lt;/strong&gt; This setting ensures that resolved alerts (alerts that return to a healthy state) are also sent as notifications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;text:&lt;/code&gt;&lt;/strong&gt; This is the content of the notification message. It uses template variables to include the alert's summary and description.&lt;/li&gt;
&lt;/ul&gt;
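Independently of Alertmanager, you can verify that the webhook itself accepts messages. The sketch below builds the JSON payload that Slack's incoming webhook expects; the URL shown is a placeholder you would replace with the one copied earlier, and the actual send is left commented out so nothing is posted accidentally:

```python
import json
import urllib.request

webhook_url = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder, not a real webhook
payload = {"text": "<!channel> test alert from Alertmanager setup"}
body = json.dumps(payload).encode("utf-8")

request = urllib.request.Request(
    webhook_url,
    data=body,
    headers={"Content-Type": "application/json"},
)
# Uncomment to actually post the message to your channel:
# urllib.request.urlopen(request)

# The payload round-trips as valid JSON
assert json.loads(body)["text"].startswith("<!channel>")
```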

&lt;p&gt;Return to the Prometheus directory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;rules.yml&lt;/code&gt; file in the &lt;code&gt;Prometheus&lt;/code&gt; directory and paste in the below configuration:&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;These rules define alerts that monitor the health and performance of containers and MySQL databases.&lt;/p&gt;

&lt;p&gt;For the containers, the alerts include detecting when a Cadvisor container is down, when a container has been killed, is absent, experiences high memory usage, high throttle rate, low CPU utilization, or low memory usage.&lt;/p&gt;

&lt;p&gt;For the MySQL database, the alerts cover scenarios where MySQL is down, there are too many connections (&amp;gt;80%), high threads running (&amp;gt;60%), slow queries, InnoDB log waits, or MySQL has been restarted recently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Compose
&lt;/h3&gt;

&lt;p&gt;For container orchestration, automation and simplicity, we will use a Docker Compose file to run all the containers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the root of the project folder, create a new directory called &lt;code&gt;monitoring&lt;/code&gt; and in that directory create a &lt;code&gt;docker-compose.yml&lt;/code&gt; file:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir monitoring
cd monitoring
touch docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Paste the below contents into the `docker-compose.yml` file:

&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;h2&gt;
  
  
  Run the Containers
&lt;/h2&gt;

&lt;p&gt;To run the containers defined in the Docker Compose file, run the below command from inside the monitoring directory:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;compose&lt;/span&gt; &lt;span class="nx"&gt;up&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt;


&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/code&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nx"&gt;pre&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;&lt;span class="na"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;This command automates the process of checking for the images specified in the file. If these images are already present on the system, the command proceeds to create and launch the corresponding containers. However, if the images are not available, it fetches them before initiating container creation. The command then ensures that these containers are executed in the background, freeing up the terminal for other tasks.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h2&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"monitoring"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#monitoring"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  Monitoring
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h2&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Now that the containers are up and running, we can monitor them and get insights.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"alerts-in-slack"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#alerts-in-slack"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  Alerts in Slack
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Immediately the containers are brought up with Docker Compose, you should receive an alert in your Slack channel.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4e8gmkch88lyqzwy3f9j.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Initial Alert"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;The alert mentioned above originates from the MySQLd Exporter and serves to notify you when the MySQL server is either inactive or not functioning. In the previous article, we ran the MySQL container in conjunction with the PHP and phpMyAdmin containers. If the containers are still running, the alert won't trigger. In a situation that all three containers were stopped, the MySQLd Exporter identifies the absence of the MySQL container which runs the MySQL server and, noticing that it's non-operational, sends an alert.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;The alert will continue to be sent at intervals to the channel until the issue has been resolved depending on the duration set. &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;If you take a close look at the rule alert&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"highlight"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;pre&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"highlight jsx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

 &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"o"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;-&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;alert&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;ContainerKilled&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;expr&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"dl"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;'&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"s1"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;time() - container_last_seen &lt;span class="ni"&gt;&amp;amp;gt;&lt;/span&gt; 60&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"dl"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;'&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"k"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;for&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"mi"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;0&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;m&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;labels&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;severity&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;warning&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;annotations&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;summary&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Container&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nf"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;killed &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;(&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span 
class="p"&gt;&amp;gt;&lt;/span&gt;instance&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/span&amp;gt; &amp;lt;span class="nx"&amp;gt;$labels&amp;lt;/&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;instance&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;}})&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"nx"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;description&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"dl"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;"&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"s2"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;A container has disappeared&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"se"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;\n&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"s2"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; 
 VALUE = &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;$value&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"se"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;\n&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"s2"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;  LABELS = &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;$labels&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"dl"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;"&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;


&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;&lt;span class="na"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;pre&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;&lt;span class="na"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;The &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;for: 0m&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; indicates that the alert condition is evaluated at the current moment (time()), and if it's true, the alert will be immediately fired. In other words, the alert doesn't require the condition to persist for a specific duration before being triggered; it will fire as soon as the condition is met.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;In this specific case, the condition &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;time() - container_last_seen &lt;span class="ni"&gt;&amp;amp;gt;&lt;/span&gt; 60&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; checks if the time elapsed since the last time the container was seen is greater than 60 seconds. If this condition becomes true at any given moment, the ContainerKilled alert will be fired immediately.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Now, when we bring the MySQL container back up, we will get an alert stating that the issue has been resolved:&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;br&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5z2ojpxzi008uuzvyh73.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Resolved"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"cadvisor"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#cadvisor"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  cAdvisor
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;To access the cAdvisor container, visit &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;http:localhost:8080&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lobzsj0139xgxwl6yuei.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"cadvisor"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;To see the metrics it is gathering about your containers from the Docker Daemon, click on &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Docker Containers&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bmy6eku9xiw3d0eoylm2.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"cadvisor docker container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"prometheus"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#prometheus"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  Prometheus
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;To access the Prometheus GUI, visit &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;http://localhost:9090&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p4a2z7l9k36qg48j0zkq.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Prometheus"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Click on &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Status&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; as shown in the above image, and select &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Targets&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; to see the endpoints or data sources that Prometheus is scraping (retrieving) metrics from:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nyxqtvmb87a34552gy2c.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Targets"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;This shows that all the endpoints are active and Prometheus can reach them&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;To see the rules we defined in the Prometheus directly, click on &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Status&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; again and select &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Rules&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0u9ydi1p82wssh850oiu.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Rules"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;All the rules defined are shown here.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;To see the Alerts, Click on &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Alerts&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y3amhrglmceof8pyzvzt.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Alerts"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;From the above we can see all the alerts from the &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;rules.yml&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; file we configured earlier. We can also see the "inactive," "pending," and "firing" status bars, which are used to describe the states of alerts and how they are managed within the alerting system.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;strong&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Inactive:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;strong&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; An alert is in the "inactive" state when the conditions specified in its alerting rule are not currently met. In other words, the metric or metrics being monitored have not crossed the defined threshold or condition to trigger the alert. Inactive alerts do not generate any notifications or actions.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;strong&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Pending:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;strong&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; When an alert transitions from "inactive" to "firing," it goes through a "pending" state for a duration defined in the alerting rule. The "pending" state helps prevent alerts from rapidly toggling between "inactive" and "firing" due to minor fluctuations in metric values. During the "pending" state, no additional notifications are sent, and the alert is not escalated. If the condition persists for the defined duration, the alert transitions to the "firing" state.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;strong&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Firing:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;strong&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; An alert is in the "firing" state when the conditions specified in its alerting rule are met and sustained over the duration of the "firing" state. This is the state where the actual alert notifications are generated and sent to configured receivers, such as email addresses or external services. The alert will remain in the "firing" state as long as the condition persists. Once the condition is resolved and the metric values no longer trigger the alert, it will transition back to the "inactive" state.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;From the above image, we have 17 alerts in a Pending state due to &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;ContainerLowCpuUtilization&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; and &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;ContainerLowMemoryUsage&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;If we look into the &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;ContainerLowCpuUtilization&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;, we can identify the containers with low CPU utilization:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hii11drp8wwav77wruzw.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"low CPU utilization"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;This above image reveals the container names exhibiting minimal CPU usage. The alert criteria was set to identify containers with CPU utilization below 20% over a span of one week. As the duration has not yet reached a full week, all containers are currently indicated as having CPU usage below the 20% threshold.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"node-exporter"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#node-exporter"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  Node exporter
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;A Node Exporter is a utility used in conjunction with the Prometheus monitoring system to collect and expose metrics from an operating system's hardware and system resources. It allows you to monitor various aspects of the host machine's performance, resource utilization, and health.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;To see host OS level metrics, access the node exporter at &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;http://localhost:9100&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; and Click on the &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Metrics&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; link displayed:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ikznlff3iafqajjs5pel.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Node Exporter"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;strong&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;NB:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;strong&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; Our focus is not on the host OS metrics but on containers, we are only using the node exporter here to get better insights on monitoring and scraping with Prometheus.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"alertmanager"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#alertmanager"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  Alertmanager
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;The alert manager can be accessed at &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;http://localhost:9093&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3bfdvzkmssfucwx30mqq.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Alertmanager"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;All alerts rules that moved from the &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;inactive&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; state to the &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;firing&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; state will appear here and the Alertmanager will send a alert to the Slack channel depending on the duration specified for that alert rule.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"mysqld-exporter"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#mysqld-exporter"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  MySQLd Exporter
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;The &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://github.com/prometheus/mysqld_exporter"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;MySQL Exporter&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; is a tool used in conjunction with the Prometheus monitoring system to collect and expose metrics from MySQL database servers. It allows you to monitor various aspects of your MySQL database's performance and health, enabling you to detect potential issues, track trends, and set up alerts based on the collected metrics.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;The MySQLd Exporter can be accessed at &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;http://localhost:9104&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmp8djz23ep3vkknbey4.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"MySQLd Exporter"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h2&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"visualisation"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#visualisation"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  Visualisation
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h2&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;To gain deeper insights and enhance our container monitoring, we will be leveraging Grafana, a visualisation tool.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"grafana"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#grafana"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  Grafana
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"http://grafana.com/oss/grafana/"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Grafana&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; is an open-source platform used for monitoring, visualisation, and analysis of time-series data. It enables users to create interactive and customisable dashboards that display metrics and data from various sources, making it easier to understand and interpret complex data trends and patterns.  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;We will be integrating the data and metrics generated from Prometheus into Grafana so that we can visualise it better.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;The Grafana container has already been created and started, in your browser, visit &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;http:localhost:3000&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; to login to Grafana.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fpr500dw0rf9nwb2sjh1.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Grafana Login"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Enter the admin user and password that was passed into the container as environment variables and you should be able to login.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l82lmbdnz9iwl166nz9u.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Grafana Logged In"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Click on "Add your first data source" and select "Prometheus"&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30bheoutop4loigyh5c8.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Add data source"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Give it a name or you can stick with the default name, fill in the Prometheus URL which will be &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;http://prometheus:9090&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;. To be sure of the URL, you can check back the targets in Prometheus and the endpoint for Prometheus, that will be the URL to fill in&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/si4rsrdoc44tpjez9kqf.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Prometheus"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Scroll down to the bottom and click on the "Save and Test" button, you should get a successfully queries message, if not, check the URL again for mistakes.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kd2262w3dcq09bkuha51.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Endpoint"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Now go back to "Home" by clicking on the hamburger icon, then click on "Dashboards"&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqds7w6ejuhfz8p7y01o.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Home"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Click on the "New" button and select "Import"&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;br&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nssfms497kdnjfnlnbxc.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Dashboard"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;The first dashboard we will be importing will be for the MySQLd exporter, it's an open source dashboard which can you get more details about &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://grafana.com/grafana/dashboards/14057-mysql/"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;here&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;. From the link, copy the ID number which is &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;14057&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; and paste into the &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;import&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; box and click on the &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;load&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; button:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ek5ubydcy7958ze8y8is.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Import"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;The MySQLd dashboard will be loaded with metrics from the MySQL database for visualisation:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orgs4itvnw7gik6d7gzi.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"dashboard"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Because the containers haven't been up for way too long, you might not see a lot of metrics displayed. We will can change the time range to &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;2m&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; to see all that has happened under 2 minutes:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u4hddsk18j26961ncyyv.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"time range"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;After the time range has been reduced to 2 minutes, you should be able to see some metrics displayed in graphs, etc:&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94pt41btscvqg5f6fmlt.png"&lt;/span&gt; &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Metrics"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;With that set, you can visualise your database metrics with the prebuilt dashboard just set up.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h2&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"conclusion"&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#conclusion"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;a&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  Conclusion
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h2&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Throughout this comprehensive three-part article series, we covered a comprehensive setup of a LAMP stack application, containerization of the application, and effectively monitoring of the application.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>docker</category>
      <category>prometheus</category>
      <category>monitoring</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Containerizing a LAMP Stack Application</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Sun, 06 Aug 2023 15:24:23 +0000</pubDate>
      <link>https://forem.com/danielfavour/containerizing-a-lamp-stack-application-4pf1</link>
      <guid>https://forem.com/danielfavour/containerizing-a-lamp-stack-application-4pf1</guid>
      <description>&lt;p&gt;In the previous &lt;a href="https://dev.to/danielfavour/setting-up-a-lamp-stack-application-1nii?preview=e2110a9d5a3ab5f630a9c4593ffe1e85442cb794f746025c28255cc9fec88a504b99fe00aa3ce3573de172c0b086d2e6b87ebfb8b868130e28b04efd#installation"&gt;article&lt;/a&gt;, we learned how to set up a LAMP stack application. In this article, we will be looking at containerizing the LAMP stack application.&lt;/p&gt;

&lt;p&gt;Containerization of an application involves packaging it along with its dependencies, configurations, and runtime environment into a Docker container. This encapsulation ensures that the application can run consistently across different environments and eliminates potential compatibility issues.&lt;/p&gt;

&lt;p&gt;For this process, our primary focus will be on the PHP script. Initially, we'll create a Dockerfile for the PHP application, allowing us to build the Docker image and subsequently run the container. To establish a database connection, we'll attach a MySQL container. Additionally, we'll introduce an extra container, phpMyAdmin, which will serve as a user-friendly GUI for accessing and managing the database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container Architecture diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7ifnd6ag76a037rocm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7ifnd6ag76a037rocm0.png" alt="container architecture" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please check the below embed to view the image in higher resolution:&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://user-images.githubusercontent.com/89241109/256974658-d62771cc-a614-4b45-acb6-903cb6119a63.png" rel="noopener noreferrer"&gt;
      user-images.githubusercontent.com
    &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;The diagram above illustrates the container architecture utilizing Docker Compose. The applications are executed with Docker Compose and are encapsulated within a Docker bridge network, facilitating smooth communication among them. The host OS provides a Docker Engine, making Docker functionality available for seamless container management and orchestration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container Communication Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz2tfptdirmhyyqvj34y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz2tfptdirmhyyqvj34y.png" alt="Container Communication" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please check the below embed to view the image in higher resolution:&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://user-images.githubusercontent.com/89241109/257053008-1f06824c-f91f-42c0-9e43-5ea46aaa80a4.png" rel="noopener noreferrer"&gt;
      user-images.githubusercontent.com
    &lt;/a&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Approach to Running the Containers
&lt;/h3&gt;

&lt;p&gt;We will be employing two different methods to run the containers: without Docker Compose and with Docker Compose.&lt;/p&gt;

&lt;p&gt;In the "without Docker Compose" approach, we will rely on Makefiles to automate the container building and running processes. The Makefiles will handle tasks like pulling the required Docker images, creating and configuring containers, setting up networking, and other necessary actions. This method allows us to manage the containerization workflow efficiently, automating key steps with concise and easily maintainable scripts.&lt;/p&gt;

&lt;p&gt;On the other hand, the "with Docker Compose" approach involves utilizing Docker Compose, a powerful tool for defining and managing multi-container applications. With Docker Compose, we can specify the services, networks, volumes, and configurations required for our application within a single YAML file. This streamlines the entire process, making it easier to deploy and manage multiple containers in a cohesive manner.&lt;/p&gt;
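
&lt;p&gt;As a rough sketch of what such a file can look like (the service names, image tags, ports, and credentials below are illustrative, not necessarily the exact file used in this project), a Compose definition for this stack could be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.8"

services:
  php:
    build: ./php
    ports:
      - "8080:80"
    env_file: ./php/.env
    depends_on:
      - mysql

  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: Strongpassword@123
      MYSQL_DATABASE: dev_to
    volumes:
      - ./mysql/db.sql:/docker-entrypoint-initdb.d/db.sql

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8081:80"
    environment:
      PMA_HOST: mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;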

&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;

&lt;p&gt;The project structure should be identical to the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── .env
├── docker-compose.yml
├── install.sh
├── mysql
│   ├── db.sql
│   └── makefile
├── php
│   ├── .env
│   ├── Dockerfile
│   ├── form.html
│   ├── form_submit.php
│   └── makefile
├── setup.sh
└── vagrantfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
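
&lt;p&gt;If you are recreating this layout from scratch, it can be scaffolded with a few shell commands (the &lt;code&gt;lamp-stack&lt;/code&gt; directory name here is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create the two sub-directories, then the empty files in each location
mkdir -p lamp-stack/mysql lamp-stack/php
touch lamp-stack/.env lamp-stack/docker-compose.yml lamp-stack/install.sh \
      lamp-stack/setup.sh lamp-stack/vagrantfile
touch lamp-stack/mysql/db.sql lamp-stack/mysql/makefile
touch lamp-stack/php/.env lamp-stack/php/Dockerfile lamp-stack/php/form.html \
      lamp-stack/php/form_submit.php lamp-stack/php/makefile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;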



&lt;h2&gt;
  
  
  Without Docker Compose (Makefile Approach)
&lt;/h2&gt;

&lt;p&gt;A Makefile is a file used with the Unix utility "make" to automate the build process of software projects. It contains instructions, typically written in shell commands, and is named "makefile" or "Makefile" depending on the system. Each rule in the Makefile has a target, dependencies, and commands. When you run the "make" command, it reads the Makefile and executes the specified commands to build the software according to the defined rules.&lt;/p&gt;

&lt;p&gt;In the context of containerizing our application, a Makefile serves as a valuable automation tool. It streamlines the entire containerization process by automating the necessary tasks. The Makefile is designed to install essential tools, lint both the Dockerfile and PHP application code, build the Docker image, and run the container image.&lt;/p&gt;

&lt;p&gt;The primary objective behind using a Makefile is to achieve seamless automation, making the development workflow more efficient and consistent.&lt;/p&gt;
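
&lt;p&gt;To make the rule structure concrete, here is a minimal, hypothetical makefile in that spirit (the target names, image tag, and the hadolint linter are examples, not the exact file used in this project). Each rule has a target, optional dependencies after the colon, and tab-indented commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# "build" depends on "lint": make runs lint first, then the build commands.
# Note that make requires the command lines to be indented with a tab.
build: lint
	docker build -t lamp-php .

# lint the Dockerfile before building
lint:
	hadolint Dockerfile

# "run" depends on "build", so the image is rebuilt before the container starts
run: build
	docker run -d -p 8080:80 lamp-php
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;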

&lt;h3&gt;
  
  
  Create PHP directory
&lt;/h3&gt;

&lt;p&gt;In the root of the project folder, create a &lt;code&gt;php&lt;/code&gt; directory and move the &lt;code&gt;form_submit.php&lt;/code&gt; and &lt;code&gt;form.html&lt;/code&gt; files into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;mkdir&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt;
&lt;span class="nx"&gt;mv&lt;/span&gt; &lt;span class="nx"&gt;form_submit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt;
&lt;span class="nx"&gt;mv&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Environment Variables
&lt;/h3&gt;

&lt;p&gt;We will create environment variables in the php folder. These variables will be used by form_submit.php.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;.env&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Paste the below into the file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MYSQL_PASSWORD=Strongpassword@123
DB_HOST=mysql
MYSQL_DATABASE=dev_to
DB_USER=root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember to change these values to ones of your own choosing, but be consistent with them across the environment. These variables are what PHP will use to communicate with the MySQL server for data storage.&lt;/p&gt;
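
&lt;p&gt;A quick way to sanity-check that these variables parse as expected is to source the file in a shell and print one of them (the printf simply recreates the example file above for the demo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# recreate the example .env (tee writes the file and also echoes it to stdout)
printf 'MYSQL_PASSWORD=Strongpassword@123\nDB_HOST=mysql\nMYSQL_DATABASE=dev_to\nDB_USER=root\n' | tee .env
set -a        # auto-export every variable defined while sourcing
. ./.env
set +a
echo "$DB_HOST"   # prints: mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;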

&lt;h3&gt;
  
  
  Write the Dockerfile
&lt;/h3&gt;

&lt;p&gt;To containerize the PHP application, we need to create a Dockerfile. In the php directory, create a new file called Dockerfile without any file extension. Dockerfiles don't require extensions.&lt;/p&gt;

&lt;p&gt;Now, paste the following content into the Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;FROM&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;7.4&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;apache&lt;/span&gt;

&lt;span class="nx"&gt;WORKDIR&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/www/&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt;

&lt;span class="nx"&gt;RUN&lt;/span&gt; &lt;span class="nx"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ext&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;mysqli&lt;/span&gt; &lt;span class="nx"&gt;pdo&lt;/span&gt; &lt;span class="nx"&gt;pdo_mysql&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ext&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;enable&lt;/span&gt; &lt;span class="nx"&gt;mysqli&lt;/span&gt;

&lt;span class="nx"&gt;COPY&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/www/&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt;
&lt;span class="nx"&gt;COPY&lt;/span&gt; &lt;span class="nx"&gt;form_submit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/www/&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt;

&lt;span class="nx"&gt;RUN&lt;/span&gt; &lt;span class="nx"&gt;chown&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;R&lt;/span&gt; &lt;span class="nx"&gt;www&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;www&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/www/&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;a2enmod&lt;/span&gt; &lt;span class="nx"&gt;rewrite&lt;/span&gt;

&lt;span class="nx"&gt;EXPOSE&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Breakdown of what each line means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;FROM php:7.4-apache&lt;/code&gt;:&lt;/strong&gt; This line sets the base image for our container. It uses the official PHP image with Apache server, version 7.4. This base image already includes PHP and Apache, making it convenient for hosting PHP applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;WORKDIR /var/www/html&lt;/code&gt;:&lt;/strong&gt; The &lt;code&gt;WORKDIR&lt;/code&gt; instruction sets the working directory inside the container to &lt;code&gt;/var/www/html&lt;/code&gt;, so subsequent commands like &lt;code&gt;COPY&lt;/code&gt; or &lt;code&gt;RUN&lt;/code&gt; execute relative to this directory. This path is the common location for web application files on Apache servers, and the &lt;code&gt;COPY&lt;/code&gt; commands that follow place the PHP application files there for the Apache web server to serve.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;RUN docker-php-ext-install mysqli pdo pdo_mysql &amp;amp;&amp;amp; docker-php-ext-enable mysqli&lt;/code&gt;:&lt;/strong&gt; This line uses the &lt;code&gt;RUN&lt;/code&gt; instruction to execute commands during the Docker image build process. Here, we are installing PHP extensions mysqli, pdo, and pdo_mysql required for database connections. The &lt;code&gt;docker-php-ext-enable&lt;/code&gt; command is used to enable the mysqli extension.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;COPY form.html /var/www/html/index.html&lt;/code&gt;:&lt;/strong&gt; The COPY instruction copies the &lt;code&gt;form.html&lt;/code&gt; file from the host (your local machine) to the container's &lt;code&gt;/var/www/html&lt;/code&gt; directory. In this case, it is renamed to index.html, serving as the default HTML page when accessing the root URL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;COPY form_submit.php /var/www/html&lt;/code&gt;:&lt;/strong&gt; Similarly, this line copies the &lt;code&gt;form_submit.php&lt;/code&gt; file from the host to the container's &lt;code&gt;/var/www/html&lt;/code&gt; directory. This PHP file handles the form submissions from the &lt;code&gt;form.html&lt;/code&gt; page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;RUN chown -R www-data:www-data /var/www/html &amp;amp;&amp;amp; a2enmod rewrite&lt;/code&gt;:&lt;/strong&gt; Here, we use the &lt;code&gt;RUN&lt;/code&gt; instruction to set the ownership of the &lt;code&gt;/var/www/html&lt;/code&gt; directory to the www-data user and group. Apache typically runs under the www-data user, so this ensures proper permissions for serving the website content.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The second part of this line uses the &lt;code&gt;a2enmod&lt;/code&gt; command to enable the Apache module rewrite. The rewrite module is needed to allow URL rewriting, enabling cleaner URLs and better routing for the PHP application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;EXPOSE 80&lt;/code&gt;:&lt;/strong&gt; The &lt;code&gt;EXPOSE&lt;/code&gt; instruction is a metadata declaration indicating which network ports the container listens on at runtime; here, port 80. It does not actually publish the port to the host machine; it only serves as documentation. To publish the port, you pass &lt;code&gt;-p&lt;/code&gt; to &lt;code&gt;docker run&lt;/code&gt;, as the &lt;code&gt;run&lt;/code&gt; target does later.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create the Environment Variables File
&lt;/h3&gt;

&lt;p&gt;In the php directory, create a file named &lt;code&gt;.env&lt;/code&gt;; it will store the environment variables for the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Copy the contents below into it. Fill in your desired values, but leave &lt;code&gt;DB_HOST&lt;/code&gt; set to &lt;code&gt;mysql&lt;/code&gt; and &lt;code&gt;DB_USER&lt;/code&gt; set to &lt;code&gt;root&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MYSQL_PASSWORD=
DB_HOST=mysql
MYSQL_DATABASE=
DB_USER=root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
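&lt;p&gt;As a quick sanity check before starting the containers, you can verify that the values in &lt;code&gt;.env&lt;/code&gt; were actually filled in. Below is a minimal sketch (the &lt;code&gt;check_env&lt;/code&gt; helper is illustrative, not part of this guide's setup); it writes an example &lt;code&gt;.env&lt;/code&gt; with placeholder values purely for demonstration:&lt;/p&gt;

```shell
# Example .env with placeholder values, for demonstration only
printf 'MYSQL_PASSWORD=example\nDB_HOST=mysql\nMYSQL_DATABASE=app\nDB_USER=root\n' > .env

# Fail if a KEY=VALUE line is missing or its value is empty
check_env() {
  key="$1"
  value="$(grep -E "^${key}=" .env | cut -d= -f2-)"
  if [ -z "$value" ]; then
    echo "Missing value for ${key} in .env" >&2
    return 1
  fi
  echo "${key} is set"
}

check_env MYSQL_PASSWORD
check_env DB_HOST
check_env MYSQL_DATABASE
check_env DB_USER
```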



&lt;h3&gt;
  
  
  Write the Makefile
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To create the &lt;code&gt;Makefile&lt;/code&gt; file, run the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;Makefile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Paste the configuration below into the &lt;code&gt;Makefile&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Makefile&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;PHP&lt;/span&gt; &lt;span class="nx"&gt;Dockerfile&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;PHP&lt;/span&gt; &lt;span class="nx"&gt;Code&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SILENT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;Homebrew&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;installed&lt;/span&gt; &lt;span class="nx"&gt;otherwise&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;brew&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;gt;&lt;/span&gt; &lt;span class="sr"&gt;/dev/&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Installing Homebrew...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="sr"&gt;/bin/&lt;/span&gt;&lt;span class="nx"&gt;bash&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Homebrew installed!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Homebrew is already installed.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;Hadolint&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;installed&lt;/span&gt; &lt;span class="nx"&gt;otherwise&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;hadolint&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;gt;&lt;/span&gt; &lt;span class="sr"&gt;/dev/&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Installing Hadolint...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;brew&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;hadolint&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hadolint installed!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hadolint is already installed.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;wget&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;installed&lt;/span&gt; &lt;span class="nx"&gt;otherwise&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;wget&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;gt;&lt;/span&gt; &lt;span class="sr"&gt;/dev/&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Installing wget...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;brew&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;wget&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;wget installed!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;wget is already installed.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;installed&lt;/span&gt; &lt;span class="nx"&gt;otherwise&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;gt;&lt;/span&gt; &lt;span class="sr"&gt;/dev/&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Installing PHP...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;brew&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PHP installed!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PHP is already installed.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Install&lt;/span&gt; &lt;span class="nx"&gt;Xcode&lt;/span&gt; &lt;span class="nx"&gt;Developer&lt;/span&gt; &lt;span class="nx"&gt;Tools&lt;/span&gt; &lt;span class="nx"&gt;but&lt;/span&gt; &lt;span class="nx"&gt;check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;they&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;already&lt;/span&gt; &lt;span class="nx"&gt;installed&lt;/span&gt; &lt;span class="nx"&gt;first&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;xcode&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;select&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="sr"&gt;/dev/&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;amp;&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Installing Xcode Developer Tools...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;xcode&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;select&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Xcode Developer Tools are already installed.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SILENT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="nx"&gt;lint_dockerfile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Lint&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;Dockerfile&lt;/span&gt; &lt;span class="nx"&gt;using&lt;/span&gt; &lt;span class="nx"&gt;hadolint&lt;/span&gt;
    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;See&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt; &lt;span class="nx"&gt;hadolint&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;instructions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//github.com/hadolint/hadolint&lt;/span&gt;
    &lt;span class="nx"&gt;hadolint&lt;/span&gt; &lt;span class="nx"&gt;Dockerfile&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Print&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;successful&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Dockerfile linting completed successfully. No errors found, Dockerfile follows best practices.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;



&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SILENT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;    
&lt;span class="nx"&gt;lint_php&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="kd"&gt;set&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;  &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Enable&lt;/span&gt; &lt;span class="nx"&gt;verbose&lt;/span&gt; &lt;span class="nx"&gt;output&lt;/span&gt;
    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;PHPCS&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;PHPCBF&lt;/span&gt; &lt;span class="nx"&gt;linter&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;downloaded&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;otherwise&lt;/span&gt; &lt;span class="nx"&gt;download&lt;/span&gt; &lt;span class="nx"&gt;them&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;if&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;phpcs.phar&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Downloading phpcs.phar...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;wget&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//squizlabs.github.io/PHP_CodeSniffer/phpcs.phar; \&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;phpcs.phar downloaded!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;phpcs.phar is already downloaded.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;For&lt;/span&gt; &lt;span class="nx"&gt;PHPCBF&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;if&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;phpcbf.phar&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Downloading phpcbf.phar...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;wget&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//squizlabs.github.io/PHP_CodeSniffer/phpcbf.phar; \&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;phpcbf.phar downloaded!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;phpcbf.phar is already downloaded.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Make&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;downloaded&lt;/span&gt; &lt;span class="nx"&gt;PHAR&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt; &lt;span class="nx"&gt;executable&lt;/span&gt;
    &lt;span class="nx"&gt;chmod&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="nx"&gt;phpcs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;phar&lt;/span&gt;
    &lt;span class="nx"&gt;chmod&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="nx"&gt;phpcbf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;phar&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Lint&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="nx"&gt;code&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;check&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;any&lt;/span&gt; &lt;span class="nx"&gt;errors&lt;/span&gt;
    &lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="nx"&gt;phpcs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;phar&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;standard&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;PSR12&lt;/span&gt; &lt;span class="nx"&gt;form_submit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Continue&lt;/span&gt; &lt;span class="kd"&gt;with&lt;/span&gt; &lt;span class="nx"&gt;other&lt;/span&gt; &lt;span class="nx"&gt;targets&lt;/span&gt; &lt;span class="nx"&gt;by&lt;/span&gt; &lt;span class="nx"&gt;recursively&lt;/span&gt; &lt;span class="nx"&gt;invoking&lt;/span&gt; &lt;span class="s2"&gt;`make`&lt;/span&gt;
    &lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;MAKE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;build&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt;


&lt;span class="nx"&gt;build&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Build&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="nx"&gt;using&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;Dockerfile&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt; &lt;span class="nx"&gt;directory&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;network&lt;/span&gt;
    &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;build&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;test&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Create&lt;/span&gt; &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="nx"&gt;but&lt;/span&gt; &lt;span class="nx"&gt;check&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="nx"&gt;exists&lt;/span&gt; &lt;span class="nx"&gt;first&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="nx"&gt;inspect&lt;/span&gt; &lt;span class="nx"&gt;test_network&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="sr"&gt;/dev/&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;amp;&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
        &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="nx"&gt;test_network&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;\&lt;/span&gt;
    &lt;span class="nx"&gt;fi&lt;/span&gt;


&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt;
    &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;latest&lt;/span&gt;

    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;phpmyadmin&lt;/span&gt;
    &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;phpmyadmin&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="nx"&gt;PMA_ARBITRARY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="nx"&gt;PMA_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="nx"&gt;phpmyadmin&lt;/span&gt;


&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Target&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;entire&lt;/span&gt; &lt;span class="nx"&gt;setup&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;
&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;lint_dockerfile&lt;/span&gt; &lt;span class="nx"&gt;lint_php&lt;/span&gt; &lt;span class="nx"&gt;build&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down the configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;.SILENT&lt;/code&gt;:&lt;/strong&gt; In a Makefile, the &lt;code&gt;.SILENT&lt;/code&gt; special target suppresses the normal echoing of recipe commands as they run. With no prerequisites, &lt;code&gt;.SILENT:&lt;/code&gt; puts the whole Makefile in silent mode; to silence only specific rules, list their targets as prerequisites (for example, &lt;code&gt;.SILENT: install&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, when you run make, it displays each command it executes in the terminal. This can be helpful for understanding what's happening during the build process, but it can also lead to a lot of noise, especially for simple and repetitive commands.&lt;/p&gt;
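&lt;p&gt;As a small illustration (separate from this project's Makefile), compare an echoed recipe with a silenced one:&lt;/p&gt;

```make
# hello.mk -- illustration only
noisy:
	echo "make prints this command line before running it"

.SILENT: quiet
quiet:
	echo "only this output appears; the command itself is not echoed"
```

&lt;p&gt;Running &lt;code&gt;make -f hello.mk noisy&lt;/code&gt; shows both the command and its output, while &lt;code&gt;make -f hello.mk quiet&lt;/code&gt; shows only the output.&lt;/p&gt;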

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;install&lt;/code&gt;:&lt;/strong&gt; This target automates the installation of essential tools and dependencies: Homebrew, Hadolint, wget, PHP, and the Xcode Developer Tools. The &lt;code&gt;@&lt;/code&gt; symbol before a command stops Make from echoing the command itself (its output still appears), giving a cleaner log during installation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For each tool, the Makefile checks whether it is already installed using &lt;code&gt;command -v&lt;/code&gt; followed by the tool's name. If the tool is not found, it installs it, either by fetching the official installation script (for Homebrew) or by using Homebrew itself (for the rest). This automation ensures a smooth, repeatable setup of the development environment, letting developers focus on the project instead of manual tool installation.&lt;/p&gt;
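&lt;p&gt;The check-then-install pattern used throughout the &lt;code&gt;install&lt;/code&gt; target can be sketched in plain shell (the &lt;code&gt;ensure_installed&lt;/code&gt; helper and the commented-out install step are illustrative placeholders, not part of this guide's Makefile):&lt;/p&gt;

```shell
# Sketch of the check-then-install pattern from the install target
ensure_installed() {
  tool="$1"
  if ! command -v "$tool" > /dev/null 2>&1; then
    echo "Installing ${tool}..."
    # brew install "$tool"   # the real install step would run here
  else
    echo "${tool} is already installed."
  fi
}

# 'sh' exists on any POSIX system, so this takes the "already installed" branch
ensure_installed sh
```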

&lt;p&gt;For example, running &lt;code&gt;make install&lt;/code&gt; prints the output below when the tools are already installed; otherwise, it installs them first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User-demo:php daniel$ make install
Homebrew is already installed.
Hadolint is already installed.
wget is already installed.
PHP is already installed.
Xcode Developer Tools are already installed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;lint_dockerfile&lt;/code&gt;:&lt;/strong&gt; This target employs &lt;a href="https://github.com/hadolint/hadolint" rel="noopener noreferrer"&gt;Hadolint&lt;/a&gt;, a Dockerfile linter, to enforce best practices for writing Dockerfiles. When &lt;code&gt;make lint_dockerfile&lt;/code&gt; is executed, it automatically lints the Dockerfile in the current directory, ensuring it adheres to industry-standard guidelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The purpose of this target is to give developers a quick, automated way to validate their Dockerfiles. If the Dockerfile is free of issues, Hadolint produces no output at all, which could be confusing. To address this, the target echoes a message indicating that the Dockerfile passed the linting process successfully. &lt;br&gt;
If the Dockerfile does contain issues, the target displays Hadolint's error output in the terminal, giving developers the feedback they need to fix any non-compliant code.&lt;/p&gt;
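&lt;p&gt;This "echo on success" pattern can be shown generically (the &lt;code&gt;run_lint&lt;/code&gt; wrapper below is illustrative, not the article's actual target):&lt;/p&gt;

```shell
# Illustrative wrapper: many linters (like Hadolint) are silent on success,
# so we print an explicit confirmation when the linter exits cleanly.
run_lint() {
    if "$@"; then
        echo "Lint passed."
    else
        echo "Lint failed."
    fi
}

run_lint true    # stand-in for a silent, successful linter run
```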

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;lint_php&lt;/code&gt;:&lt;/strong&gt; This target lints the PHP code using &lt;a href="https://github.com/squizlabs/PHP_CodeSniffer" rel="noopener noreferrer"&gt;PHP_CodeSniffer&lt;/a&gt; (PHPCS), ensuring it adheres to PHP coding standards, particularly the PSR-12 standard. To provide more insight into the execution process, the &lt;code&gt;set -x&lt;/code&gt; command enables verbose output, displaying the actual commands executed in the terminal during the target's execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The target begins by checking if the &lt;code&gt;phpcs.phar&lt;/code&gt; and &lt;code&gt;phpcbf.phar&lt;/code&gt; files exist in the current directory. If not, it proceeds to download them using the wget command from their official URLs. The &lt;code&gt;chmod +x&lt;/code&gt; command is then used to make the downloaded PHAR files (phpcs.phar and phpcbf.phar) executable, allowing them to be run as commands.&lt;/p&gt;

&lt;p&gt;Next, the linter executes the &lt;code&gt;phpcs.phar&lt;/code&gt; command with the &lt;code&gt;--standard=PSR12&lt;/code&gt; option, analyzing the &lt;code&gt;form_submit.php&lt;/code&gt; file for any violations of the PSR-12 standard. Any errors or warnings are displayed in the terminal. The &lt;code&gt;|| true&lt;/code&gt; at the end ensures that the Makefile execution continues even if this command fails, preventing the linter's failure from halting the entire Makefile run and keeping the focus on the Docker-related tasks. To fix the reported code errors, we would use &lt;code&gt;phpcbf.phar&lt;/code&gt;.&lt;/p&gt;
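&lt;p&gt;The effect of &lt;code&gt;|| true&lt;/code&gt; can be demonstrated in isolation, using &lt;code&gt;false&lt;/code&gt; as a stand-in for a failing lint command:&lt;/p&gt;

```shell
# "|| true" forces a zero exit status, so a failing step does not halt
# the rest of a make recipe (or any script running under "set -e").
set -e                 # abort on any failing command
false || true          # stand-in for "./phpcs.phar ... || true"
echo "Workflow continues despite the lint failure."
```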

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;build&lt;/code&gt;:&lt;/strong&gt; This target automates the process of building a Docker image based on the Dockerfile located in the current working directory. The image is tagged with the name php-test. Additionally, it checks for the existence of a Docker network named test_network, and if it doesn't exist, the target creates it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;run&lt;/code&gt;:&lt;/strong&gt; This target is responsible for starting two Docker containers: one for PHP and another for phpMyAdmin&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the php container, the command uses &lt;code&gt;docker run&lt;/code&gt; to start a new container. The &lt;code&gt;-d&lt;/code&gt; flag runs the container in detached mode, meaning it runs in the background. The &lt;code&gt;--name php&lt;/code&gt; option assigns the name "php" to the container. The &lt;code&gt;--network test&lt;/code&gt; option connects the container to the Docker network named &lt;code&gt;test&lt;/code&gt;. The &lt;code&gt;-p 80:80&lt;/code&gt; option maps port 80 on the host to port 80 in the container, allowing access to the web server running inside it. The &lt;code&gt;--env-file .env&lt;/code&gt; option points to an environment file (&lt;code&gt;.env&lt;/code&gt;) in the current directory that contains environment variables for the container. &lt;code&gt;php-test&lt;/code&gt; specifies the Docker image to use for the container. &lt;/p&gt;

&lt;p&gt;For the phpMyAdmin container, the command uses &lt;code&gt;docker run&lt;/code&gt; to start another container. The &lt;code&gt;-d&lt;/code&gt; flag runs it in detached mode. The &lt;code&gt;--name phpmyadmin-test&lt;/code&gt; option assigns the name &lt;code&gt;phpmyadmin-test&lt;/code&gt; to the container. The &lt;code&gt;--network test&lt;/code&gt; option connects the container to the same Docker network &lt;code&gt;test&lt;/code&gt; as the PHP container. The &lt;code&gt;-p 8000:80&lt;/code&gt; option maps port 8000 on the host to port 80 in the container, allowing access to phpMyAdmin's web interface.&lt;br&gt;
The &lt;code&gt;-e PMA_ARBITRARY=1&lt;/code&gt; and &lt;code&gt;-e PMA_HOST=mysql&lt;/code&gt; options set environment variables that configure the phpMyAdmin container's behaviour. &lt;code&gt;phpmyadmin&lt;/code&gt; specifies the Docker image to use for the container; the image is pulled from the Docker registry.&lt;/p&gt;
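&lt;p&gt;Taken together, the &lt;code&gt;run&lt;/code&gt; target described above amounts to the following two commands (a sketch assembled from the flags described; it assumes a running Docker daemon and the names used in this article):&lt;/p&gt;

```shell
docker run -d --name php --network test -p 80:80 --env-file .env php-test
docker run -d --name phpmyadmin-test --network test -p 8000:80 \
    -e PMA_ARBITRARY=1 -e PMA_HOST=mysql phpmyadmin
```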

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;all&lt;/code&gt;:&lt;/strong&gt; This target acts as a meta-target, referencing all the targets defined in the Makefile. When you execute the command &lt;code&gt;make all&lt;/code&gt;, it will sequentially execute each target in the order they are defined in the Makefile.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To execute the Makefile and automate the build process, run the following command in the terminal:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;make&lt;/span&gt; &lt;span class="nx"&gt;all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The Makefile will automatically execute all the defined targets in the specified order, as described previously. This one-liner command saves you from manually performing individual tasks and ensures that the complete workflow, including installation, linting, Docker image building, and running containers, is handled seamlessly. &lt;/p&gt;

&lt;p&gt;After executing all the targets in the Makefile using make all, you can check your browser, &lt;code&gt;http://localhost:8000&lt;/code&gt;, to access the phpMyAdmin container. However, please note that the PHP and phpMyAdmin containers are not fully functional at this point since they still need to be connected to a MySQL database for full functionality.&lt;/p&gt;
&lt;h3&gt;
  
  
  Create the MySQL database
&lt;/h3&gt;

&lt;p&gt;In the root of the project folder, create a folder called &lt;code&gt;mysql&lt;/code&gt;, cd into the directory and create two files: &lt;code&gt;db.sql&lt;/code&gt; and &lt;code&gt;makefile&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;mkdir&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt;
&lt;span class="nx"&gt;cd&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sql&lt;/span&gt;
&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;makefile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Paste the below contents into the &lt;code&gt;db.sql&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE IF NOT EXISTS test (
    id INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(255),
    email VARCHAR(255),
    message TEXT,
    PRIMARY KEY (id)
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above is an SQL script that creates a table named &lt;code&gt;test&lt;/code&gt; with specific column definitions. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;CREATE TABLE IF NOT EXISTS&lt;/code&gt;:&lt;/strong&gt; This statement creates the test table only if it doesn't already exist in the database. This prevents any potential errors from re-creating an existing table.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The test table has four columns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;id:&lt;/code&gt;&lt;/strong&gt; An integer column with the NOT NULL constraint, meaning it cannot contain null values. It is also defined as &lt;code&gt;AUTO_INCREMENT&lt;/code&gt;, which automatically generates a unique value for each new row.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;name:&lt;/code&gt;&lt;/strong&gt; A variable-length character column with a maximum length of 255 characters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;email:&lt;/code&gt;&lt;/strong&gt; Another variable-length character column with a maximum length of 255 characters.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;message:&lt;/code&gt;&lt;/strong&gt; A column of the TEXT data type, which can store large amounts of text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Paste the below configuration into the &lt;code&gt;makefile&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Makefile&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt;

&lt;span class="nx"&gt;pull&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;pull&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;latest&lt;/span&gt;

&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/lib/my&lt;/span&gt;&lt;span class="nx"&gt;sql&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;entrypoint&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;initdb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sql&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="nx"&gt;MYSQL_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;favour&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="nx"&gt;MYSQL_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;mypassword&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span 
class="nx"&gt;e&lt;/span&gt; &lt;span class="nx"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;Strongpassword&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;123&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="nx"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;dev_to&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="mi"&gt;3306&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;3306&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;latest&lt;/span&gt;

&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Target&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;entire&lt;/span&gt; &lt;span class="nx"&gt;setup&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;
&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;pull&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Makefile provides a convenient way to set up a MySQL container using Docker and automates the process with the following targets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;pull:&lt;/code&gt;&lt;/strong&gt; The pull target is used to pull the latest MySQL Docker image (mysql:latest) from the Docker registry. This ensures that the latest version of the MySQL image is available locally for running the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;run:&lt;/code&gt;&lt;/strong&gt; The run target is responsible for starting the MySQL container. It uses &lt;code&gt;docker run&lt;/code&gt; to create and run a new container named &lt;code&gt;mysql&lt;/code&gt;. The container is connected to a network called &lt;code&gt;test&lt;/code&gt; (&lt;code&gt;--network test&lt;/code&gt;), mounts a local &lt;code&gt;./data&lt;/code&gt; directory to &lt;code&gt;/var/lib/mysql&lt;/code&gt; to persist the database files, and mounts a local SQL file (&lt;code&gt;db.sql&lt;/code&gt;) to the container's &lt;code&gt;/docker-entrypoint-initdb.d/db.sql&lt;/code&gt; path. This allows the SQL file to be executed during container initialization, populating the database with any initial data. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, environment variables are provided for configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;-e MYSQL_ROOT_PASSWORD=Strongpassword@123:&lt;/code&gt;&lt;/strong&gt; Sets the root user's password to &lt;code&gt;Strongpassword@123.&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;-e MYSQL_DATABASE=dev_to:&lt;/code&gt;&lt;/strong&gt; Creates a database named &lt;code&gt;dev_to.&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;all:&lt;/code&gt;&lt;/strong&gt; The &lt;code&gt;all&lt;/code&gt; target acts as a meta-target, referencing both pull and run. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; Keep in mind that sensitive credentials should not be passed directly on the command line; they are passed directly here only to demonstrate the process.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To create the database, run the makefile with the following command from inside the mysql directory:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;make&lt;/span&gt; &lt;span class="nx"&gt;all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you execute make all, it will execute both targets in the specified order. This allows you to pull the latest MySQL image and then run the MySQL container with the necessary configurations in a single command.&lt;/p&gt;
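&lt;p&gt;To verify that the database was initialized, you can open a MySQL shell inside the container (this assumes the container name and credentials from the Makefile above):&lt;/p&gt;

```shell
# Connect to the dev_to database as the "favour" user;
# enter "mypassword" when prompted.
docker exec -it mysql mysql -u favour -p dev_to
```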

&lt;p&gt;Now, on your browser, visit &lt;code&gt;localhost&lt;/code&gt; to see the form.html file being served by Apache. When you submit the form, you will be redirected to &lt;code&gt;localhost:80&lt;/code&gt;, where PHP is running. If your input was successfully saved in the database, PHP will print a success message; if it was not, PHP will print an error message instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  With Docker-Compose
&lt;/h2&gt;

&lt;p&gt;Having used Makefiles to run the containers and the surrounding processes, we will now use Docker Compose, an effective container orchestration tool, to run the containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Environment Variables
&lt;/h3&gt;

&lt;p&gt;We first need to create environment variables for the MySQL container to use, since it is not good practice to pass sensitive information directly.&lt;/p&gt;

&lt;p&gt;In the root of the folder, create a &lt;code&gt;.env&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Paste the below into it
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MYSQL_PASSWORD=mypassword
MYSQL_ROOT_PASSWORD=Strongpassword@123
DB_HOST=mysql
MYSQL_DATABASE=dev_to
DB_USER=favour
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace these values with your own credentials.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Compose
&lt;/h3&gt;

&lt;p&gt;Now that the environment variables have been set, we can proceed to writing the Docker Compose file.&lt;/p&gt;

&lt;p&gt;In the root of the project folder, create a file called &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;compose&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Paste the below contents into the file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;3&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="nx"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;php&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;env_file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;
    &lt;span class="nx"&gt;build&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;php&lt;/span&gt;
      &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;no&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;
      &lt;span class="nx"&gt;dockerfile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="nx"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;80:80&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;CMD&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;curl&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;-f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
      &lt;span class="nx"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
      &lt;span class="nx"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
    &lt;span class="nx"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;php&lt;/span&gt;
    &lt;span class="nx"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;backend&lt;/span&gt;

  &lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;env_file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;
    &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;latest&lt;/span&gt;
    &lt;span class="nx"&gt;restart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;always&lt;/span&gt;
    &lt;span class="nx"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;3306:3306&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;CMD&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mysqladmin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ping&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;-h&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;localhost&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;5s&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="nx"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;10s&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="nx"&gt;start_period&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;3s&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="nx"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
    &lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;${MYSQL_ROOT_PASSWORD}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="nx"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;${MYSQL_DATABASE}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="nx"&gt;MYSQL_USER&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;${DB_USER}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="nx"&gt;MYSQL_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;${MYSQL_PASSWORD}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt;
    &lt;span class="nx"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;entrypoint&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;initdb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sql&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mysql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mysql_data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="sr"&gt;/lib/my&lt;/span&gt;&lt;span class="nx"&gt;sql&lt;/span&gt;
    &lt;span class="nx"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;backend&lt;/span&gt;

  &lt;span class="nx"&gt;phpmyadmin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;phpmyadmin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;latest&lt;/span&gt;
    &lt;span class="nx"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;phpmyadmin&lt;/span&gt;
    &lt;span class="nx"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Expose&lt;/span&gt; &lt;span class="nx"&gt;phpMyAdmin&lt;/span&gt; &lt;span class="nx"&gt;on&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="mi"&gt;8000&lt;/span&gt;
    &lt;span class="nx"&gt;restart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;always&lt;/span&gt;
    &lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;PMA_ARBITRARY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Use&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;arbitrary&lt;/span&gt; &lt;span class="nx"&gt;hostname&lt;/span&gt; &lt;span class="nx"&gt;resolution&lt;/span&gt;
      &lt;span class="nx"&gt;PMA_HOST&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;${DB_HOST}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Use&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt; &lt;span class="nx"&gt;service&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;host&lt;/span&gt;
    &lt;span class="nx"&gt;depends_on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;mysql&lt;/span&gt;     
    &lt;span class="nx"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;backend&lt;/span&gt;

&lt;span class="nx"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bridge&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above docker-compose file defines a multi-container environment using Docker Compose, allowing you to run and manage multiple services (containers) together as part of a single application.&lt;/p&gt;

&lt;p&gt;Breaking down the files to bits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;Version:&lt;/code&gt;&lt;/strong&gt; The file specifies the version of Docker Compose syntax being used. In this case, it's using version 3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;Services:&lt;/code&gt;&lt;/strong&gt; This section defines three services (containers): php, mysql, and phpmyadmin.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;php service:&lt;/code&gt;&lt;/strong&gt; The php service is built using the Dockerfile located in the ./php directory. It uses environment variables defined in the .env file. The service is accessible on port 80 and has a health check to test the health of the container by making an HTTP request to &lt;code&gt;http://localhost/&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;mysql service:&lt;/code&gt;&lt;/strong&gt; The mysql service uses the official MySQL image (mysql:latest) from Docker Hub. It reads environment variables from the .env file to set up MySQL's root password, database, user, and password. The container is accessible on port 3306, and it has a health check to verify the health of the MySQL server by pinging it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;phpmyadmin service:&lt;/code&gt;&lt;/strong&gt; The phpmyadmin service uses the official phpMyAdmin image (phpmyadmin:latest) from Docker Hub. It exposes phpMyAdmin on port 8000 and depends on the mysql service. Environment variables are set to configure phpMyAdmin to connect to the MySQL container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;Volumes:&lt;/code&gt;&lt;/strong&gt; The mysql service mounts two volumes: &lt;code&gt;./mysql/db.sql&lt;/code&gt; to initialize the database with an SQL file and &lt;code&gt;./mysql/mysql_data&lt;/code&gt; to persist MySQL data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;Networks:&lt;/code&gt;&lt;/strong&gt; The backend network is created to allow communication between the services (php, mysql, phpmyadmin).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To run the containers, in the root directory where the docker compose file exists, run the following command:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command automatically builds the Docker image and runs the containers.&lt;/p&gt;
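&lt;p&gt;Once the stack is up, you can inspect it with standard Compose commands:&lt;/p&gt;

```shell
docker compose ps        # list the three services and their health status
docker compose logs php  # view the output of an individual service
```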


&lt;ul&gt;
&lt;li&gt;To bring down the containers:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Container Health Checks
&lt;/h3&gt;

&lt;p&gt;Following standard best practices, containers should define health checks to ensure they are running properly and responding to requests as expected. Health checks have already been defined in the Docker Compose file, ensuring that the containers are periodically monitored for their health status. When you run the command &lt;code&gt;docker ps -a&lt;/code&gt;, you should see the containers listed with their health status displayed under the &lt;code&gt;STATUS&lt;/code&gt; column.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS                    PORTS                               NAMES
eb6e3ddc3e53   phpmyadmin:latest     "/docker-entrypoint.…"   17 seconds ago   Up 15 seconds (healthy)   0.0.0.0:8000-&amp;gt;80/tcp                phpmyadmin
1f1abdcbd82d   mysql:latest          "docker-entrypoint.s…"   17 seconds ago   Up 15 seconds (healthy)   0.0.0.0:3306-&amp;gt;3306/tcp, 33060/tcp   mysql
2b91dfbbb74d   module-2-php          "docker-php-entrypoi…"   17 seconds ago   Up 15 seconds (healthy)   0.0.0.0:80-&amp;gt;80/tcp                  php
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the containers first start, the &lt;code&gt;STATUS&lt;/code&gt; column will show (health: starting), since the container is still starting and the health check is being evaluated. The status of a container can also be &lt;code&gt;unhealthy&lt;/code&gt;, which indicates that the container's health check has failed, signaling that there might be an issue with the container or its underlying application. If the status shows exited, the container has stopped running, either because it completed its task or due to an error.&lt;/p&gt;
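&lt;p&gt;As a sketch of how such a check is declared, a Compose healthcheck for the mysql service might look like the following (the interval, timeout, and retry values here are illustrative, not taken from the project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  mysql:
    image: mysql:latest
    healthcheck:
      # Ping the server; a non-zero exit code marks the container unhealthy
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;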

&lt;p&gt;Health checks can also be run manually.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To check a container's health manually, for example the &lt;code&gt;mysql&lt;/code&gt; container, run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect --format='{{json .State.Health.Status}}' &amp;lt;container name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should return the container's health status.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F89241109%2F257146802-4cf90e33-0a92-40b4-ab66-ae122b56a3a2.png" class="article-body-image-wrapper"&gt;&lt;img alt="Screenshot 2023-07-26 at 11 01 58" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F89241109%2F257146802-4cf90e33-0a92-40b4-ab66-ae122b56a3a2.png" width="800" height="30"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do the same for the &lt;code&gt;php&lt;/code&gt; container and then the &lt;code&gt;phpmyadmin&lt;/code&gt; container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F89241109%2F257146481-dbdbd767-c3ed-4443-9223-282a76567b09.png" class="article-body-image-wrapper"&gt;&lt;img alt="Screenshot 2023-07-26 at 11 07 50" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F89241109%2F257146481-dbdbd767-c3ed-4443-9223-282a76567b09.png" width="800" height="29"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Beginners guide to GitOps and Flux</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Fri, 26 May 2023 12:37:22 +0000</pubDate>
      <link>https://forem.com/danielfavour/beginners-guide-to-gitops-and-flux-1di9</link>
      <guid>https://forem.com/danielfavour/beginners-guide-to-gitops-and-flux-1di9</guid>
      <description>&lt;p&gt;Managing code changes in a Kubernetes cluster can be complex, particularly when multiple applications are involved. Keeping track of changes, versions, and dependencies can be challenging, and conflicts can arise that impact cluster stability. &lt;/p&gt;

&lt;p&gt;GitOps provides a solution to these challenges by leveraging Git as the source of truth for all changes to the cluster. By committing all configuration changes and updates to a Git repository, GitOps provides a centralized location for tracking all changes to the cluster, while also providing a standardized approach to deploying and updating applications. &lt;/p&gt;

&lt;p&gt;In this article, we will explore the GitOps methodology and take a closer look at Flux, a popular GitOps tool used for managing Kubernetes clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is GitOps?
&lt;/h2&gt;

&lt;p&gt;GitOps is a software development approach that emphasizes using Git as the primary tool for managing infrastructure and application deployments. Essentially, GitOps means that all changes to infrastructure and application code are made through Git commits and that these changes trigger automated deployment pipelines to update the running environment. This approach enables teams to easily track changes, roll back deployments, and ensure that the production environment matches the desired state described in Git.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitOps principles
&lt;/h2&gt;

&lt;p&gt;The GitOps principles are essential for modern software development and deployment practices. They allow teams to automate the management of their infrastructure and applications and ensure that their deployments are consistent and reliable.&lt;br&gt;
In this section, we will discuss the four key principles of GitOps, which are:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The entire system is described declaratively
&lt;/h3&gt;

&lt;p&gt;GitOps requires infrastructure and application configuration to be expressed declaratively. A declarative approach involves specifying the desired end result rather than providing specific instructions on how to accomplish the task.&lt;/p&gt;

&lt;p&gt;For example, in Kubernetes, to describe the desired state of an application through a declarative approach, we define the desired state in a YAML file, and Kubernetes takes care of managing the underlying infrastructure to ensure that the application is running as intended.&lt;/p&gt;

&lt;p&gt;Say we want to deploy a simple web application in a Kubernetes cluster, we define the desired state of the application using a Kubernetes Deployment object, which includes the container image, number of replicas, and other specifications. Here is an example YAML file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;apps&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;v1&lt;/span&gt;
&lt;span class="nx"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Deployment&lt;/span&gt;
&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;webapp&lt;/span&gt;
&lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="nx"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;matchLabels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;webapp&lt;/span&gt;
  &lt;span class="nx"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;webapp&lt;/span&gt;
    &lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;webapp&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;container&lt;/span&gt;
          &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;webapp&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;latest&lt;/span&gt;
          &lt;span class="nx"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;containerPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This YAML file describes the desired state of a Kubernetes Deployment for our web application. &lt;br&gt;
We want three replicas of our container, which will be managed by a Kubernetes ReplicaSet. The container will listen on port 80, and the image used will be my-webapp-image:latest. &lt;br&gt;
Kubernetes will take care of ensuring that the desired state is met, even if there are changes in the underlying infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The desired system state is versioned in Git
&lt;/h3&gt;

&lt;p&gt;The desired state is the ideal state that developers and operators aim to achieve and maintain. This state is stored in Git to keep track of changes made to the system over time, revert to previous versions, and collaborate with others to make changes. By storing the desired state in Git, developers can maintain consistency, reliability, and security, while also working in an agile way to continually improve the system over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Approved changes can be automatically applied to the system
&lt;/h3&gt;

&lt;p&gt;After the desired state has been stored in Git, GitOps operators, also known as software agents, automatically retrieve the desired state from Git and apply it to one or more Kubernetes targets. A software agent is a software component that continuously monitors the changes made to a Git repository and triggers the deployment of those changes to a target environment, such as a Kubernetes cluster. This process occurs without the need for manual intervention, allowing for seamless and efficient deployment of the desired state.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Software agents ensure correctness and reconcile the system to match the desired state
&lt;/h3&gt;

&lt;p&gt;GitOps operators follow a continuous loop that involves observing the repository for any changes in the desired state. This process entails comparing the difference between the actual state and the desired state, after which the operators automatically take action to reconcile the two states.&lt;/p&gt;
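&lt;p&gt;This reconciliation loop can be sketched in a few lines of Python. This is an illustrative model only; real operators such as Flux watch Git and the cluster API continuously and apply changes through Kubernetes, but the core idea of diffing and converging the two states is the same:&lt;/p&gt;

```python
# Minimal model of a GitOps reconciliation loop (illustrative only;
# real operators like Flux watch Git and the cluster API continuously).

def diff(desired: dict, actual: dict) -> dict:
    """Return the changes needed for `actual` to match `desired`."""
    changes = {}
    for name, spec in desired.items():
        if actual.get(name) != spec:
            changes[name] = spec       # create or update this resource
    for name in actual:
        if name not in desired:
            changes[name] = None       # prune: no longer declared in Git
    return changes

def reconcile(desired: dict, actual: dict) -> dict:
    """Apply the computed changes and return the new cluster state."""
    new_state = dict(actual)
    for name, spec in diff(desired, actual).items():
        if spec is None:
            del new_state[name]        # remove pruned resources
        else:
            new_state[name] = spec     # converge toward the desired state
    return new_state

# The desired state comes from Git; the actual state from the cluster.
desired = {"my-webapp": {"replicas": 3}}
actual = {"my-webapp": {"replicas": 2}, "orphan": {}}
print(reconcile(desired, actual))  # {'my-webapp': {'replicas': 3}}
```

&lt;p&gt;Note how the resource that is no longer declared in Git is pruned: the desired state in the repository, not the cluster, is the source of truth.&lt;/p&gt;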

&lt;h2&gt;
  
  
  The GitOps workflow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffup6feodvkoaguu4v35t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffup6feodvkoaguu4v35t.png" alt="GitOps Pipeline"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;GitOps Pipeline: Source &lt;a href="https://www.weave.works/technologies/gitops/#:~:text=What%20are%20the%20core%20principles%20of%20GitOps%3F%201,to%20ensure%20correctness%20and%20alert%20on%20divergence%20" rel="noopener noreferrer"&gt;Weaveworks&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Assuming that your code is stored in a Git repository and contains Kubernetes manifests such as Helm charts, Kustomizations, etc., the task at hand is to retrieve these manifests from the repository and deploy them to your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Within your Kubernetes cluster, there exist multiple namespaces, services, and various cloud-native tools installed. A Kubernetes beginner typically resorts to the "Imperative" method of deployment, which involves utilizing the following command to apply a configuration file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f "filename"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The imperative approach involves issuing specific instructions or commands describing how to accomplish a task rather than declaring the desired end result. GitOps discourages this.&lt;/p&gt;

&lt;p&gt;One of the limitations of the imperative approach is that any modifications made to the Kubernetes manifest stored in the Git repository will not automatically reflect on the Kubernetes cluster. As a result, the kubectl apply command must be executed again to synchronize the cluster with the updated manifest.&lt;/p&gt;

&lt;p&gt;In a scenario where multiple team members are making changes to the manifests files in a Git repository, it can be a daunting task to monitor the changes taking place in the cluster. Tracking who deployed what resource, what was deployed, or when it was deployed can become a challenging feat, resulting in a state of confusion when an issue arises and needs to be resolved. The lack of knowledge of the root cause of the problem can lead to significant delays in fixing the issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;By implementing a GitOps tool like Flux, you can efficiently track the changes in the cluster, which ensures that any issue that arises can be fixed promptly. With GitOps, you can know that any change made to the cluster is being tracked, and in the event of an issue, the cause can be easily identified, and the problem resolved quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  How it works
&lt;/h3&gt;

&lt;p&gt;With GitOps, you deploy an agent into your Kubernetes cluster. In this case, the agent is Flux, which helps you manage the resources in your cluster.&lt;/p&gt;

&lt;p&gt;As teams make changes to the Git repository, they edit only the code. A CI/CD pipeline running against that repository tests the changes, and once the tests pass, the changes are merged into the repository. The agent living in your cluster is then responsible for pulling each change and applying it to the specific resource the change was made to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does the agent know about the Kubernetes manifest?&lt;/strong&gt;&lt;br&gt;
The agent responsible for deploying and managing Kubernetes resources can know about the Kubernetes manifest by: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reading the manifest files directly from a version control system, such as a Git repository, or a local file system.&lt;/li&gt;
&lt;li&gt;Receiving the manifest as input from a user or another system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, the cluster administrator remains at the center of the process, since they are the one who tells the agent where the manifest files are located. The agent is then responsible for recording who deployed what, what was deployed, and when it was deployed to the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Flux?
&lt;/h2&gt;

&lt;p&gt;Flux is a Git-based continuous delivery tool that provides smooth, incremental deployment solutions for Kubernetes. Its primary objective is to keep Kubernetes clusters in sync with configuration sources such as Git repositories. It achieves this by automatically applying updated configuration whenever new code is available for deployment. &lt;/p&gt;

&lt;h2&gt;
  
  
  Flux Components
&lt;/h2&gt;

&lt;p&gt;Flux is a powerful GitOps tool that is constructed using a variety of specialized components, collectively known as the &lt;strong&gt;GitOps Toolkit&lt;/strong&gt;. These components include Flux Controllers, &lt;a href="https://fluxcd.io/flux/components/" rel="noopener noreferrer"&gt;composable APIs&lt;/a&gt;, and &lt;a href="https://fluxcd.io/flux/components/" rel="noopener noreferrer"&gt;reusable Go packages&lt;/a&gt; that are designed to enhance the functionality of GitOps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flux Controllers in the GitOps Toolkit
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9psid3iot1odgxwechs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9psid3iot1odgxwechs.png" alt="GitOps Toolkit"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Source: &lt;a href="https://fluxcd.io/flux/components/" rel="noopener noreferrer"&gt;Flux Docs&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Source Controller:&lt;/strong&gt; It automates the process of obtaining files from external sources and integrating them into application code. This includes sources such as Git, OCI, Helm repositories, and S3-compatible buckets. The source controller makes it easier and faster for developers to develop and deploy applications without worrying about the details of accessing and integrating external resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image Reflector Controller:&lt;/strong&gt; It monitors container registries for changes and updates the Kubernetes cluster with the latest images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image Automation Controller:&lt;/strong&gt; It automates the process of updating the container image in the repository and commits changes to the repository automatically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kustomize Controller:&lt;/strong&gt; It applies Kubernetes manifests generated by Kustomize to a cluster and ensures the desired state of the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Helm Controller:&lt;/strong&gt; It automates the deployment of Helm charts to a cluster and ensures that the desired state of the cluster is maintained.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Notification Controller:&lt;/strong&gt; It sends notifications to users and teams about changes made to the Kubernetes cluster, such as deployments and configuration updates.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
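&lt;p&gt;To make the Source and Kustomize controllers concrete, here is a minimal sketch of the two custom resources they reconcile. The names, namespace, repository URL, and path below are illustrative, not taken from a real project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Watched by the Source Controller: fetches the repository contents
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-webapp
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/my-webapp   # illustrative URL
  ref:
    branch: main
---
# Watched by the Kustomize Controller: applies manifests from the source
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-webapp
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-webapp
  path: "./deploy"
  prune: true   # remove cluster resources no longer declared in Git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;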

&lt;h2&gt;
  
  
  Benefits of Flux
&lt;/h2&gt;

&lt;p&gt;In this section, we will look at some of the benefits of Flux which include:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Automation
&lt;/h3&gt;

&lt;p&gt;Flux automates the deployment process by continuously monitoring and updating Kubernetes resources in real time. This automation reduces the likelihood of human error and improves reliability by ensuring that the correct resources are deployed to the correct environments consistently.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Scalability
&lt;/h3&gt;

&lt;p&gt;Flux is designed to work well in large-scale Kubernetes environments. It can handle multiple clusters, applications, and even entire infrastructures at once, which makes it a great tool for managing complex deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Rollback
&lt;/h3&gt;

&lt;p&gt;Flux provides easy rollbacks in case of deployment failures or errors. It does this by using a Git-based version control system, which allows developers to roll back to a previous working version of the deployment. This feature reduces downtime and improves service availability, ensuring that the application remains operational even in the event of a deployment failure.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Integration
&lt;/h3&gt;

&lt;p&gt;Flux CD can be integrated with a range of tools, including monitoring, logging, and testing frameworks. This integration allows developers to manage the entire deployment process from a single platform, which improves collaboration and streamlines the deployment process.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. GitOps
&lt;/h3&gt;

&lt;p&gt;Flux uses the GitOps methodology, which means that all changes to the deployment process are stored in version control. This provides greater transparency, auditability, and reproducibility of the deployment process, allowing developers to easily track changes and roll back to previous versions if necessary.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Customization
&lt;/h3&gt;

&lt;p&gt;Flux is highly customizable and can be tailored to fit specific deployment requirements. This customization allows developers to automate their deployment process in a way that meets their unique needs, improving efficiency and reducing errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;GitOps and Flux represent a powerful approach to software development and deployment that can help teams achieve greater speed, reliability, and scalability.&lt;/p&gt;

&lt;p&gt;By using Git as the single source of truth, GitOps enables teams to manage updates across multiple environments and platforms, while Flux provides automation tools for managing Kubernetes clusters. This approach facilitates collaboration, continuous delivery, and scalability, making it well-suited for modern cloud-native environments. &lt;/p&gt;

&lt;p&gt;Ultimately, the adoption of GitOps and Flux is expected to lead to faster, more reliable, and more efficient software development and deployment.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploy your Application to Azure Kubernetes Service</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Sun, 21 May 2023 06:32:26 +0000</pubDate>
      <link>https://forem.com/danielfavour/deploy-your-application-to-azure-kubernetes-service-1n9d</link>
      <guid>https://forem.com/danielfavour/deploy-your-application-to-azure-kubernetes-service-1n9d</guid>
      <description>&lt;p&gt;Cloud computing has become a critical component of application development and deployment in today's fast-paced digital world. Azure, a major cloud platform, provides a comprehensive suite of tools and services to enable easy deployment, management, and scaling of applications. However, deploying applications to the cloud can be a complex and overwhelming task without appropriate guidance and the right set of tools and services.&lt;/p&gt;

&lt;p&gt;This article guides you through deploying your application to Azure, from containerization to monitoring and scaling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we get started, ensure that you have the following in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An active &lt;a href="https://azure.microsoft.com/en-gb/pricing/" rel="noopener noreferrer"&gt;Azure Subscription&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli" rel="noopener noreferrer"&gt;Azure CLI&lt;/a&gt; installed&lt;/li&gt;
&lt;li&gt;Basic knowledge of Azure&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodejs.org/en/download" rel="noopener noreferrer"&gt;Node&lt;/a&gt; installed&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; Installed&lt;/li&gt;
&lt;li&gt;An IDE, &lt;a href="https://code.visualstudio.com/Download" rel="noopener noreferrer"&gt;VSCode&lt;/a&gt; recommended&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Application Overview
&lt;/h2&gt;

&lt;p&gt;The application used for this project is built with Node.js and Express.js. It is a feedback app: users submit feedback through its interface or endpoint, and the application stores each submission in the designated &lt;code&gt;feedback&lt;/code&gt; directory, ensuring proper organization and retention of user input.&lt;/p&gt;
&lt;h3&gt;
  
  
  GitHub URL
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/FavourDaniel/SCA-Project" rel="noopener noreferrer"&gt;https://github.com/FavourDaniel/SCA-Project&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To clone this repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;git&lt;/span&gt; &lt;span class="nx"&gt;clone&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//github.com/FavourDaniel/SCA-Project&lt;/span&gt;
&lt;span class="nx"&gt;cd&lt;/span&gt; &lt;span class="nx"&gt;SCA&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;Project&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;Below is a high-level overview of what we will be building. There will be some other configurations in place, such as Virtual Networks and Load Balancers, but this architecture diagram should give you a clear picture of what the infrastructure will look like.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyvpsapncit959czbpuc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyvpsapncit959czbpuc.png" alt="Architecture Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the application locally
&lt;/h2&gt;

&lt;p&gt;To install the application's dependencies and start the server, run:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;npm&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt;
&lt;span class="nx"&gt;node&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;js&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your browser and navigate to &lt;code&gt;localhost:80&lt;/code&gt; to view the running application.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjg275h5v9cx9dt0uef7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjg275h5v9cx9dt0uef7j.png" alt="Application"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Login to your Azure Account
&lt;/h2&gt;

&lt;p&gt;To set up your infrastructure on the Azure cloud platform, a connection between your terminal and your Azure account needs to be established. To accomplish this using the Azure CLI, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;login&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will be redirected to sign in to your account. After successful authentication, your terminal should display an output similar to the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "56d87bs9-fr5n-8zzq-qq01-419l20234j0f",
    "id": "5h12gg64-60d2-1b4h-6j7c-c810419k33v2",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Visual Studio Enterprise Subscription",
    "state": "Enabled",
    "tenantId": "56d87bs9-fr5n-8zzq-qq01-419l20234j0f",
    "user": {
      "name": "username@gmail.com",
      "type": "user"
    }
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the &lt;code&gt;id&lt;/code&gt; value; it is your subscription ID. Next, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;subscription&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;subscriptionId&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt; &lt;span class="nx"&gt;subscription&lt;/span&gt; &lt;span class="nx"&gt;here&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;account&lt;/span&gt; &lt;span class="kd"&gt;set&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="nx"&gt;$subscription&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;subscriptionId&amp;gt;&lt;/code&gt; with the &lt;code&gt;id&lt;/code&gt; value you copied, and the connection should be established.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the Infrastructure
&lt;/h2&gt;

&lt;p&gt;The infrastructure will be set up on Azure using Terraform. From the repository you cloned earlier, change into the &lt;code&gt;terraform-infrastructure&lt;/code&gt; directory and run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;cd&lt;/span&gt; &lt;span class="nx"&gt;terraform&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;infrastructure&lt;/span&gt;
&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;init&lt;/span&gt;
&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;fmt&lt;/span&gt;
&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;validate&lt;/span&gt;
&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;plan&lt;/span&gt;
&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feztc7s1coz2bm05s0izs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feztc7s1coz2bm05s0izs.png" alt="terraform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I suggest running these commands individually. Terraform will proceed to create the infrastructure specified in the &lt;code&gt;main.tf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; You might encounter errors if any of the names used in the &lt;code&gt;vars.tf&lt;/code&gt; file are already in use on Azure by someone else. By names, I mean the &lt;code&gt;default&lt;/code&gt; values assigned to each resource in the &lt;code&gt;vars.tf&lt;/code&gt; file. In that case, assign a different name to the resource that failed to create.&lt;/p&gt;
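&lt;p&gt;For illustration, such a variable block in &lt;code&gt;vars.tf&lt;/code&gt; typically looks like the following (the variable name and default value here are hypothetical, not taken from the project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "acr_name" {
  type        = string
  description = "Globally unique name for the container registry"
  default     = "feedbackappacr"  # change this if the name is already taken
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;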

&lt;p&gt;To confirm that your resources have been created successfully, you can check your Azure portal and you should see the following resources created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Resource Group&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5p2vlos0q635owz2qoqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5p2vlos0q635owz2qoqn.png" alt="Resource Group"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Azure Container Registry&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fqzkf9l3minv6gxof69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fqzkf9l3minv6gxof69.png" alt="ACR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Azure Kubernetes Service&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2fehszirbeu1ntjg2hf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2fehszirbeu1ntjg2hf.png" alt="AKS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storage Accounts&lt;/strong&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwo8yntfs27q6agpwznvp.png" alt="Storage Account"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These resources were specified in the Terraform configuration file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; Because Azure Kubernetes Service is a managed service, it automatically creates supporting resources of its own when provisioned, including Virtual Networks, new Resource Groups, a Load Balancer, a Network Security Group, Storage Classes, a Route Table, etc. You do not need to create these extra services yourself; that is what makes it a managed service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build and Push the Docker Image
&lt;/h2&gt;

&lt;p&gt;In the root of the project directory, there is a &lt;code&gt;Dockerfile&lt;/code&gt; which serves as a template for building the Docker image. You will not build this image manually, as a job for it has already been set up in the GitHub Actions workflow, which can also be found at the root of the directory.&lt;/p&gt;

&lt;p&gt;GitHub Actions is a CI/CD tool that automates software development workflows, including building, testing, and deploying code, directly from your GitHub repository. &lt;/p&gt;

&lt;p&gt;To build the Docker image, create your own repository on GitHub and push the project there. Because the existing workflow has been set up but not yet configured with your credentials, the job will fail.&lt;/p&gt;

&lt;p&gt;The image will be built but will not be pushed. This is because the workflow specifies the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Set&lt;/span&gt; &lt;span class="nx"&gt;up&lt;/span&gt; &lt;span class="nx"&gt;Docker&lt;/span&gt; &lt;span class="nx"&gt;Buildx&lt;/span&gt;
  &lt;span class="nx"&gt;uses&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;setup&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;buildx&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This first step uses the &lt;code&gt;docker/setup-buildx-action@v2&lt;/code&gt; action to create and boot a builder using the docker-container driver.&lt;/p&gt;

&lt;p&gt;After that succeeds, the workflow proceeds to log in to the Azure Container Registry that was previously created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Login&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;Azure&lt;/span&gt; &lt;span class="nx"&gt;Container&lt;/span&gt; &lt;span class="nx"&gt;Registry&lt;/span&gt;
  &lt;span class="nx"&gt;uses&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Azure&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;login&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v1&lt;/span&gt;
  &lt;span class="kd"&gt;with&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;login&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="nx"&gt;secrets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ACR_REGISTRY_NAME&lt;/span&gt; &lt;span class="p"&gt;}}.&lt;/span&gt;&lt;span class="nx"&gt;azurecr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;io&lt;/span&gt;
    &lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="nx"&gt;secrets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ACR_USERNAME&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;
    &lt;span class="nl"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="nx"&gt;secrets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ACR_PASSWORD&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the above, &lt;code&gt;Azure/docker-login@v1&lt;/code&gt; is an action that logs in to an Azure Container Registry. Because the login server, username, and password of your Container Registry have not been provided yet, this step is expected to fail.&lt;/p&gt;

&lt;p&gt;The same applies to the next step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Build&lt;/span&gt; &lt;span class="nx"&gt;Docker&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;
  &lt;span class="nx"&gt;uses&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;build&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v4&lt;/span&gt;
  &lt;span class="kd"&gt;with&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="nx"&gt;secrets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ACR_REGISTRY_NAME&lt;/span&gt; &lt;span class="p"&gt;}}.&lt;/span&gt;&lt;span class="nx"&gt;azurecr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;myrepository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="nx"&gt;github&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sha&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the above, &lt;code&gt;docker/build-push-action@v4&lt;/code&gt; is an action that builds and pushes Docker images using &lt;code&gt;Docker Buildx&lt;/code&gt;. Even if it manages to build the image, it will fail to push it because it does not have the credentials that grant it access to our Azure Container Registry.&lt;/p&gt;

&lt;h3&gt;
  
  
  Grant the Workflow access
&lt;/h3&gt;

&lt;p&gt;To grant the workflow access to your Container Registry, head over to your Azure portal. Under Container Registries, select your Container registry and from the left panel, select &lt;code&gt;Access keys&lt;/code&gt; and enable &lt;code&gt;Admin user&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l4xo9dfyybqr1boilcc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l4xo9dfyybqr1boilcc.png" alt="Access Keys"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This generates passwords for you, which you can always regenerate later for security reasons.&lt;/p&gt;

&lt;p&gt;Copy the &lt;code&gt;Login Server&lt;/code&gt;, &lt;code&gt;Username&lt;/code&gt;, and one of the passwords.&lt;/p&gt;
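&lt;p&gt;If you prefer the command line, the same credentials can be retrieved with the Azure CLI once the admin user is enabled. The registry name &lt;code&gt;myContainerRegistry&lt;/code&gt; below is a placeholder for your own registry name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Show the admin username and the first admin password for the registry
az acr credential show --name myContainerRegistry --query "{username:username, password:passwords[0].value}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;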

&lt;p&gt;Head back to your GitHub repository and select &lt;code&gt;Settings&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcupvtytfl8avytf02961.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcupvtytfl8avytf02961.png" alt="Settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down and select &lt;code&gt;Actions&lt;/code&gt; under &lt;code&gt;Secrets and variables&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq8vo36in6jt0fx5ustg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq8vo36in6jt0fx5ustg.png" alt="Actions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;code&gt;New repository secret&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fximgmoqkteioezf8d0hb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fximgmoqkteioezf8d0hb.png" alt="Repository secret"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name the first secret &lt;code&gt;ACR_REGISTRY_NAME&lt;/code&gt;. This is your login server which was copied previously. Paste the login server into the secret box and click on &lt;code&gt;Add secret&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Do the same for &lt;code&gt;ACR_USERNAME&lt;/code&gt; and &lt;code&gt;ACR_PASSWORD&lt;/code&gt;, using the username and password you copied. Afterwards, you should have this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fki6059ana7xxlgxxfvp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fki6059ana7xxlgxxfvp7.png" alt="Image secrets"&gt;&lt;/a&gt;&lt;/p&gt;
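&lt;p&gt;As an alternative to the web interface, the same repository secrets can be created with the GitHub CLI from within your local clone of the repository. The values below are placeholders for the login server name, username, and password you copied:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gh secret set ACR_REGISTRY_NAME --body "myContainerRegistry"
gh secret set ACR_USERNAME --body "myContainerRegistry"
gh secret set ACR_PASSWORD --body "your-acr-password"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;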

&lt;p&gt;Now that the workflow has access to your Container Registry, re-run the workflow and it should pass.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz80yv6tcba9dbwhdpaal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz80yv6tcba9dbwhdpaal.png" alt="rebuilt pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check your container registry under &lt;code&gt;Repositories&lt;/code&gt; and you should see your image there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88090cu47k112mdb4dd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88090cu47k112mdb4dd3.png" alt="Repository"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install the Kubernetes CLI
&lt;/h2&gt;

&lt;p&gt;To manage your AKS cluster using the Kubernetes command-line interface (CLI), you will need to install the &lt;code&gt;kubectl&lt;/code&gt; tool on your local machine.&lt;br&gt;
You can do this by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;aks&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;cli&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will install the kubectl tool on your machine, allowing you to interact with your AKS cluster using the Kubernetes API. Once the installation is complete, you can verify that kubectl is installed by running the &lt;code&gt;kubectl version&lt;/code&gt; command.&lt;/p&gt;
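&lt;p&gt;For example, to print only the client version without contacting a cluster, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version --client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;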

&lt;h2&gt;
  
  
  Connect to the AKS cluster using kubectl
&lt;/h2&gt;

&lt;p&gt;To connect to your AKS cluster using kubectl, you need to configure your local kubectl client with the credentials for your cluster. You can do this by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;aks&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;credentials&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;DemoRG&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;myAKSCluster&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Resource Group name &lt;code&gt;DemoRG&lt;/code&gt; and cluster name &lt;code&gt;myAKSCluster&lt;/code&gt; are defined in the &lt;code&gt;vars.tf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;The above command retrieves the Kubernetes configuration from the AKS cluster and merges it with your local Kubernetes configuration file. This makes it possible for you to use kubectl to interact with the AKS cluster.&lt;/p&gt;

&lt;p&gt;After running the command, you should see output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;Merged&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;myAKSCluster&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;current&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="sr"&gt;/home/&lt;/span&gt;&lt;span class="nx"&gt;daniel&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kube&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This indicates that the Kubernetes context named &lt;code&gt;myAKSCluster&lt;/code&gt; has been merged into the kubeconfig file located at &lt;code&gt;/home/daniel/.kube/config&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This kubeconfig file is used to store cluster access information, such as cluster, namespace, and user details, for the kubectl command-line tool.&lt;/p&gt;

&lt;p&gt;In this case, the &lt;code&gt;myAKSCluster&lt;/code&gt; &lt;a href="https://www.decodingdevops.com/what-is-kubernetes-context-and-kubernetes-context-tutorial/" rel="noopener noreferrer"&gt;context&lt;/a&gt; has been set as the current context, which means that subsequent kubectl commands will use the configuration from this context to communicate with the Kubernetes cluster.&lt;/p&gt;
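&lt;p&gt;You can confirm which context kubectl is currently pointing at by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config current-context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;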

&lt;p&gt;To confirm that your kubectl client is properly configured to connect to your AKS cluster, you can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;nodes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will return a list of the nodes in your AKS cluster. In the Terraform configuration, only one node was specified to be created, so you should see a single node.&lt;br&gt;
The output should look similar to the below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                  STATUS   ROLES   AGE     VERSION
aks-aksnodepool-16067207-vmss000000   Ready    agent   2d13h   v1.25.6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy your Application to the Cluster
&lt;/h2&gt;

&lt;p&gt;To deploy our application to the AKS cluster, we need to create a deployment, a service, a persistent volume claim, and a storage class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;Deployment:&lt;/code&gt;&lt;/strong&gt; Manages a set of &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="noopener noreferrer"&gt;replicas&lt;/a&gt; of your application's containers and handles scaling, rolling updates, and rollbacks.&lt;br&gt;
&lt;strong&gt;&lt;code&gt;Service:&lt;/code&gt;&lt;/strong&gt; Provides a stable IP address and DNS name for your application within the cluster.&lt;br&gt;
&lt;strong&gt;&lt;code&gt;Persistent Volume Claim:&lt;/code&gt;&lt;/strong&gt; Requests a specific amount of storage for your application's data.&lt;br&gt;
&lt;strong&gt;&lt;code&gt;Storage Class:&lt;/code&gt;&lt;/strong&gt; Defines the type of storage that will be provisioned for the claim. In this case, we will be using Azure File Storage as our storage class.&lt;/p&gt;

&lt;p&gt;To create our resources, we first need to write their manifest files. We will create the Storage Class first, because the Persistent Volume Claim relies on the Storage Class for &lt;a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="noopener noreferrer"&gt;dynamic provisioning&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Storage Class
&lt;/h3&gt;

&lt;p&gt;To create a manifest for the storage class, run the below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;storageclass&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, paste the below configuration into it and save:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;k8s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;v1&lt;/span&gt;
&lt;span class="nx"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;StorageClass&lt;/span&gt;
&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;azure&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;
&lt;span class="nx"&gt;provisioner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;kubernetes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;azure&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;
&lt;span class="nx"&gt;mountOptions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;dir_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0777&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;file_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0777&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;uid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;gid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;mfsymlinks&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;nobrl&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;none&lt;/span&gt;
&lt;span class="nx"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;skuName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Standard_LRS&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;eastus&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we defined Azure File Storage as our storage provisioner.&lt;/p&gt;

&lt;p&gt;To create the storage class, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="nx"&gt;storageclass&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see the deployed storage class, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;storageclass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will show you a list of all the available storage classes, including the one you just created.&lt;/p&gt;

&lt;p&gt;You should see an output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
azure-file-storage      kubernetes.io/azure-file   Delete          Immediate              false                  2d12h
azurefile               file.csi.azure.com         Delete          Immediate              true                   2d14h
azurefile-csi           file.csi.azure.com         Delete          Immediate              true                   2d14h
azurefile-csi-premium   file.csi.azure.com         Delete          Immediate              true                   2d14h
azurefile-premium       file.csi.azure.com         Delete          Immediate              true                   2d14h
default (default)       disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h
managed                 disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h
managed-csi             disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h
managed-csi-premium     disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h
managed-premium         disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   2d14h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the above output, we can see both the Azure File storage and Azure Disk storage classes provisioned.&lt;/p&gt;

&lt;p&gt;When the Azure Kubernetes Service cluster was created, it provisioned the Disk storage classes by default. We are creating this File storage class because we want to use File storage rather than Disk storage. If you decide to use Disk storage, creating this storage class is not necessary.&lt;/p&gt;
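&lt;p&gt;To inspect the details of the storage class you just created, such as its provisioner, parameters, and mount options, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe storageclass azure-file-storage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;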

&lt;h3&gt;
  
  
  Persistent Volume Claim
&lt;/h3&gt;

&lt;p&gt;The Persistent Volume Claim is the next resource we will create. This is because when we create our deployment, it will look for the persistent volume claim we specified, and if it cannot locate it, the deployment will fail.&lt;/p&gt;

&lt;p&gt;To create the manifest for the PVC, run the below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;pvc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste the following configuration into it and save.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;v1&lt;/span&gt;
&lt;span class="nx"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;pvc&lt;/span&gt;
&lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;accessModes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="nx"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="nx"&gt;Gi&lt;/span&gt;
  &lt;span class="nx"&gt;storageClassName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;azure&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See &lt;a href="https://dev.tourl/"&gt;Persistent Volume Claim&lt;/a&gt; for manifest breakdown.&lt;/p&gt;

&lt;p&gt;To create the PVC, run the below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="nx"&gt;pvc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check the created PVC, run the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;pvc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should return the below output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
scademo-pvc   Bound    pvc-20a3d37b-f734-4f53-ba96-d94e63463623   20Gi       RWO            azure-file-storage   2d12h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the PVC has been bound to the Azure File storage class we created.&lt;/p&gt;
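&lt;p&gt;For more detail on the claim, including the events emitted while it was being provisioned, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe pvc scademo-pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;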

&lt;p&gt;If you check your Storage Account in your Azure portal, you should see the file share that was created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftl00bu33r077b7y29pdz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftl00bu33r077b7y29pdz.png" alt="pvc"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;p&gt;Now that the storage class and PVC have been created, we can create the deployment.&lt;/p&gt;

&lt;p&gt;To create a manifest for the deployment, run the below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;deployment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste the below configuration into the file and save.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;apps&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;v1&lt;/span&gt;
&lt;span class="nx"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Deployment&lt;/span&gt;
&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;
&lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="nx"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;matchLabels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;
  &lt;span class="nx"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;
    &lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;nodeSelector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;kubernetes.io/os&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;linux&lt;/span&gt;
      &lt;span class="nx"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;
        &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scacontainerregistry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurecr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;io&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;scademo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;v1&lt;/span&gt;
        &lt;span class="nx"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
          &lt;span class="nx"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nx"&gt;cpu&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt; 
            &lt;span class="nx"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="nx"&gt;Mi&lt;/span&gt;
          &lt;span class="nx"&gt;limits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nx"&gt;cpu&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;
            &lt;span class="nx"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="nx"&gt;Mi&lt;/span&gt;
        &lt;span class="nx"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;containerPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt; 
        &lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TEST&lt;/span&gt;
          &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;scademo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="nx"&gt;volumeMounts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;mountPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;tmp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;
          &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;pvc&lt;/span&gt;
      &lt;span class="nx"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;pvc&lt;/span&gt;
        &lt;span class="nx"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
          &lt;span class="nx"&gt;claimName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;pvc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Refer to &lt;a href="https://learn.microsoft.com/en-us/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests" rel="noopener noreferrer"&gt;Deployments and YAML&lt;/a&gt; for a breakdown of the file configuration.&lt;/p&gt;

&lt;p&gt;To create this deployment, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="nx"&gt;deployment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check your deployment, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;deployments&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should return the below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME      READY   UP-TO-DATE   AVAILABLE   AGE
scademo   1/1     1            1           44h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; Deployments automatically create and manage pods for you, eliminating the need for manual pod creation. Whenever you create a Deployment, it also creates a ReplicaSet, which in turn creates and manages the pods.&lt;/p&gt;
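&lt;p&gt;Rather than repeatedly polling &lt;code&gt;kubectl get&lt;/code&gt;, you can also wait for the Deployment to finish rolling out. A quick check, assuming the &lt;code&gt;scademo&lt;/code&gt; deployment name above:&lt;/p&gt;

```shell
# Block until the scademo Deployment's pods are ready (or the rollout fails)
kubectl rollout status deployment/scademo
```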

&lt;p&gt;To see the deployed pod, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;pods&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should return the below output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                       READY   STATUS    RESTARTS   AGE
scademo-7885bbb755-q464b   1/1     Running   0          44h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see the Replica Set, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;rs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should return the below output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                 DESIRED   CURRENT   READY   AGE
scademo-7885bbb755   1         1         1       44h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Service
&lt;/h3&gt;

&lt;p&gt;Now we need to expose this deployment so that we can access it from outside the cluster. To do this, we will create a service.&lt;/p&gt;

&lt;p&gt;To create a manifest for the service, run the below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;touch&lt;/span&gt; &lt;span class="nx"&gt;svc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste the below configuration into it and save.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;v1&lt;/span&gt;
&lt;span class="nx"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Service&lt;/span&gt;
&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;
&lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="nx"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
  &lt;span class="nx"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above configuration, we are utilizing a LoadBalancer to access the deployment. A LoadBalancer is a type of service that creates an external load balancer in a cloud environment and assigns a public IP address to the service. It is commonly used to distribute incoming traffic across worker nodes and enables access to a service from outside the cluster.&lt;/p&gt;
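&lt;p&gt;Provisioning the Azure load balancer and its public IP takes a short while, so you can watch the service until the &lt;code&gt;EXTERNAL-IP&lt;/code&gt; column is populated. A sketch, assuming the &lt;code&gt;scademo&lt;/code&gt; service name above:&lt;/p&gt;

```shell
# Re-print the service on every change; press Ctrl+C once EXTERNAL-IP appears
kubectl get svc scademo --watch
```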

&lt;p&gt;To create the service, run the below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="nx"&gt;svc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see the created service, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;svc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will return an output similar to the below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.0.0.1      &amp;lt;none&amp;gt;         443/TCP        2d14h
scademo      LoadBalancer   10.0.44.214   40.88.195.57   80:32591/TCP   44h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The service's external IP will show &lt;code&gt;pending&lt;/code&gt; at creation; after a few seconds, an external IP address will be allocated.&lt;/p&gt;

&lt;p&gt;Copy and paste the external IP into your web browser, and you should see your application running:&lt;/p&gt;
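&lt;p&gt;You can perform the same check from a terminal with curl. Replace the address below with the &lt;code&gt;EXTERNAL-IP&lt;/code&gt; your own service was assigned; the IP shown here is the one from the sample output above and is illustrative only:&lt;/p&gt;

```shell
# Fetch the app over the service's public IP; -I prints only the response headers
curl -I http://40.88.195.57/
```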

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6d44lcj8iije5bbnw9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6d44lcj8iije5bbnw9g.png" alt="Running application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Monitoring for your Cluster
&lt;/h2&gt;

&lt;p&gt;To effectively monitor resources in your Kubernetes cluster, it's recommended to set up &lt;a href="https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-overview" rel="noopener noreferrer"&gt;Container Insights&lt;/a&gt; in Azure, which provides comprehensive monitoring of the entire cluster. Container Insights can be integrated with a Log Analytics Workspace to enable additional visibility and monitoring capabilities for containerized applications running in a Kubernetes environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Log Analytics Workspace
&lt;/h3&gt;

&lt;p&gt;This is a centralized storage and management location for log data collected by Azure Monitor, including data from your AKS clusters. It allows you to query, analyze, and visualize the collected data to obtain insights and troubleshoot issues.&lt;/p&gt;

&lt;p&gt;If you already have a Log Analytics Workspace, you can specify it when enabling monitoring for your AKS cluster, and the collected monitoring data will be stored there.&lt;/p&gt;

&lt;p&gt;If you do not have an existing Workspace, a default one is created automatically for the resource group, with a name in the format &lt;code&gt;DefaultWorkspace-&amp;lt;GUID&amp;gt;-&amp;lt;Region&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To enable the monitoring addon, which creates this default workspace, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;aks&lt;/span&gt; &lt;span class="nx"&gt;enable&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;addons&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;monitoring&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt; &lt;span class="nx"&gt;myAKSCluster&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;g&lt;/span&gt; &lt;span class="nx"&gt;DemoRG&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A monitoring addon is a component that enables you to monitor the performance and health of your AKS cluster using Azure Monitor for containers. Enabling it deploys a containerized version of the Log Analytics agent on each node in your AKS cluster.&lt;/p&gt;

&lt;p&gt;This Log Analytics agent is a component running on each node of your Kubernetes cluster, responsible for collecting logs and metrics from the node and the applications running on it. The agent is deployed as a DaemonSet in the cluster, ensuring an instance of the agent runs on every node.&lt;/p&gt;
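&lt;p&gt;You can also confirm from the Azure side that the addon was enabled and see which Workspace it reports to. This is a sketch using the cluster and resource group names from earlier; note that the addon profile key (&lt;code&gt;omsagent&lt;/code&gt; here) may differ between AKS versions:&lt;/p&gt;

```shell
# Inspect the monitoring addon profile on the cluster (enabled state and
# the Log Analytics Workspace resource ID it is configured with)
az aks show -g DemoRG -n myAKSCluster --query addonProfiles.omsagent
```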

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; The command requires the registration of the &lt;code&gt;Microsoft.OperationsManagement&lt;/code&gt; resource provider with your subscription.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learn.microsoft.com/en-us/shows/azure/operations-management-suite-oms" rel="noopener noreferrer"&gt;Microsoft.OperationsManagement&lt;/a&gt; stands for to Microsoft Operations Management Suite (OMS). OMS is a cloud-based service that provides monitoring and management capabilities across Azure.&lt;/p&gt;

&lt;p&gt;If this service is not registered with your subscription, you will receive the following error output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(AddContainerInsightsSolutionError) Code="MissingSubscriptionRegistration" Message="The subscription is not registered to use namespace 'Microsoft.OperationsManagement'. See https://aka.ms/rps-not-found for how to register subscriptions." Details=[{"code":"MissingSubscriptionRegistration","message":"The subscription is not registered to use namespace 'Microsoft.OperationsManagement'. See https://aka.ms/rps-not-found for how to register subscriptions.","target":"Microsoft.OperationsManagement"}]
Code: AddContainerInsightsSolutionError
Message: Code="MissingSubscriptionRegistration" Message="The subscription is not registered to use namespace 'Microsoft.OperationsManagement'. See https://aka.ms/rps-not-found for how to register subscriptions." Details=[{"code":"MissingSubscriptionRegistration","message":"The subscription is not registered to use namespace 'Microsoft.OperationsManagement'. See https://aka.ms/rps-not-found for how to register subscriptions.","target":"Microsoft.OperationsManagement"}]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check the registration status, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="nx"&gt;show&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt; &lt;span class="nx"&gt;Microsoft&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OperationsManagement&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;o&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="nx"&gt;show&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt; &lt;span class="nx"&gt;Microsoft&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OperationalInsights&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;o&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If they are not registered, execute the following commands to register them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="nx"&gt;register&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="nx"&gt;Microsoft&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OperationsManagement&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="nx"&gt;register&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="nx"&gt;Microsoft&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OperationalInsights&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check your Azure portal under Log Analytics Workspace to see the created Workspace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuahp222sel9velldde7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuahp222sel9velldde7h.png" alt="LAW"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify Agent Deployment
&lt;/h3&gt;

&lt;p&gt;To verify that the monitoring agent was successfully deployed into the cluster, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;ds&lt;/span&gt; &lt;span class="nx"&gt;ama&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;logs&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;kube&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command retrieves information about a DaemonSet named &lt;code&gt;ama-logs&lt;/code&gt; in the &lt;code&gt;kube-system&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ama&lt;/code&gt; stands for Azure Monitor Agent.&lt;/p&gt;

&lt;p&gt;The command should return the below output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ama-logs   1         1         1       1            1           &amp;lt;none&amp;gt;          35h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Only one agent is running because the cluster has a single node; if we had two nodes, there would be two agents.&lt;/p&gt;

&lt;p&gt;Head back to your Azure portal. Under Azure Monitor, select &lt;code&gt;view&lt;/code&gt; under Container Insights, or select &lt;code&gt;Containers&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwm2tpc57aci3mq5h2ela.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwm2tpc57aci3mq5h2ela.png" alt="Container Insight"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;code&gt;Monitored Clusters&lt;/code&gt; you should see your Cluster being monitored.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzq7fv2ge683iuruhl3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzq7fv2ge683iuruhl3x.png" alt="Monitored Cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your Cluster and you should see insights for it. You can switch the view to show insights for nodes, controllers, and containers, and also get a status report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frruwrudo8ojgpflxltia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frruwrudo8ojgpflxltia.png" alt="Cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also choose to see the metrics, as well as other settings and configurations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxqkm0d02s8o4cyjwh08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxqkm0d02s8o4cyjwh08.png" alt="Metrics"&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Scaling the deployment
&lt;/h3&gt;

&lt;p&gt;To scale our Kubernetes deployment and ensure it can handle increased traffic, we need to edit our deployment configuration.&lt;/p&gt;

&lt;p&gt;Initially, we specified our deployment to have only one replica, which is why it created a single pod. To scale the deployment, we can edit the &lt;code&gt;deployment.yml&lt;/code&gt; file and change the replica value from 1 to 4 (or any desired number) to create four replicas of the application.&lt;/p&gt;

&lt;p&gt;In the spec section, change the number of replicas to four (4)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;
  &lt;span class="nx"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;matchLabels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;scademo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After making the change, save the file and apply the updated configuration by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="nx"&gt;deployment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After applying the changes to the deployment file, give it some time to take effect. This will update the deployment with the new configuration and create the additional replicas. To confirm that the replicas have been created, we can run the following command to get the status of the deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;deployment&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should show that the scademo deployment now has 4 replicas, as indicated by the READY column.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;NAME&lt;/span&gt;      &lt;span class="nx"&gt;READY&lt;/span&gt;   &lt;span class="nx"&gt;UP&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;TO&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;DATE&lt;/span&gt;   &lt;span class="nx"&gt;AVAILABLE&lt;/span&gt;   &lt;span class="nx"&gt;AGE&lt;/span&gt;
&lt;span class="nx"&gt;scademo&lt;/span&gt;   &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;     &lt;span class="mi"&gt;4&lt;/span&gt;            &lt;span class="mi"&gt;4&lt;/span&gt;           &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="nx"&gt;h&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Previously, there was only one replica of the deployment, so the output showed 1/1. After increasing the replica count to 4, the output now shows 4/4, indicating that there are four replicas of the deployment running and all of them are available.&lt;/p&gt;

&lt;p&gt;You can also verify that there are 4 running pods by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;pods&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should return the below output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                       READY   STATUS    RESTARTS   AGE
scademo-7885bbb755-644r9   1/1     Running   0          13h
scademo-7885bbb755-q464b   1/1     Running   0          35h
scademo-7885bbb755-gjnjp   1/1     Running   0          13h
scademo-7885bbb755-kvgg5   1/1     Running   0          13h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, since deployments are used to manage replica sets, you can check the replica sets by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubectl&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;replicaset&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should return the below output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                 DESIRED   CURRENT   READY   AGE
scademo-7885bbb755   4         4         4       35h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output shows that the &lt;code&gt;scademo-7885bbb755&lt;/code&gt; replica set has 4 replicas that are all ready and available.&lt;/p&gt;

&lt;p&gt;That is how you scale your application in Kubernetes.&lt;/p&gt;
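&lt;p&gt;As an aside, you can also scale imperatively without editing the manifest, although the declarative &lt;code&gt;deployment.yml&lt;/code&gt; will then no longer match the live state:&lt;/p&gt;

```shell
# One-off scale of the scademo Deployment to 4 replicas without touching deployment.yml
kubectl scale deployment scademo --replicas=4
```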

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we explored the process of deploying an application to Azure and the various tools and services that can make the process seamless and efficient. By leveraging the right knowledge and resources, you can unlock the full potential of the Azure platform and take advantage of its vast array of features and capabilities.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>docker</category>
      <category>kubernetes</category>
      <category>containers</category>
    </item>
    <item>
      <title>Connecting an App Service to Azure SQL Database and Storage Account using Azure CLI</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Wed, 03 May 2023 21:39:41 +0000</pubDate>
      <link>https://forem.com/danielfavour/connecting-an-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-1on6</link>
      <guid>https://forem.com/danielfavour/connecting-an-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-1on6</guid>
      <description>&lt;p&gt;Effective data management is crucial for optimal application performance, and SQL databases and storage accounts play a critical role in streamlining data storage and access. &lt;/p&gt;

&lt;p&gt;SQL databases provide a reliable and scalable method of storing and managing data in a structured manner, making it easier to search, retrieve, and analyze, while storage accounts offer a secure and scalable platform for storing and accessing large amounts of unstructured data. By combining these technologies, developers can create a robust data management strategy that meets their app's unique needs.&lt;/p&gt;

&lt;p&gt;In this article, we will explore how to connect an App Service to an Azure SQL database and Storage Account using Azure CLI, a command-line tool that allows you to manage and interact with Azure resources using a shell or command-line interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we get started, ensure that you have the following in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An active Azure Subscription&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli" rel="noopener noreferrer"&gt;Azure CLI&lt;/a&gt; installed&lt;/li&gt;
&lt;li&gt;Basic knowledge of Azure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Connect to our Azure Account
&lt;/h2&gt;

&lt;p&gt;To connect to our Azure account, we can use the Azure CLI to sign in and then set the active subscription by running the commands below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;subscription&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;subscriptionId&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt; &lt;span class="nx"&gt;subscription&lt;/span&gt; &lt;span class="nx"&gt;here&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;account&lt;/span&gt; &lt;span class="kd"&gt;set&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="nx"&gt;$subscription&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;or&lt;/span&gt; &lt;span class="nx"&gt;use&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;az login&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you do not know your subscription ID, run the command below to obtain it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;login&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Upon executing the command, a prompt will appear requesting login credentials for your Azure account. After successful authentication, your terminal should display an output similar to the one below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cloudName&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AzureCloud&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;homeTenantId&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;56d87bs9-fr5n-8zzq-qq01-419l20234j0f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;5h12gg64-60d2-1b4h-6j7c-c810419k33v2&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;isDefault&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;managedByTenants&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Visual Studio Enterprise Subscription&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;state&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Enabled&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tenantId&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;56d87bs9-fr5n-8zzq-qq01-419l20234j0f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;username@gmail.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;"id"&lt;/code&gt; value is your subscription ID.&lt;/p&gt;
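&lt;p&gt;If you prefer not to read the ID out of the JSON by eye, you can extract it with standard text tools. The snippet below is a minimal sketch that parses a sample of the JSON shown above (the values are fake); in a live session, &lt;code&gt;az account show --query id -o tsv&lt;/code&gt; returns the same value directly.&lt;/p&gt;

```shell
# Hypothetical sample of the JSON printed by 'az login' (fake values).
json='[{"cloudName": "AzureCloud", "id": "5h12gg64-60d2-1b4h-6j7c-c810419k33v2", "isDefault": true}]'

# Pull out the "id" field: match the key/value pair, then take the value.
subscription=$(printf '%s' "$json" | grep -o '"id": "[^"]*"' | cut -d'"' -f4)
echo "$subscription"
```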

&lt;h2&gt;
  
  
  Create a Resource Group
&lt;/h2&gt;

&lt;p&gt;In Azure, all resources created are tied to a Resource Group. A Resource Group is a logical container that holds related Azure resources, such as virtual machines, storage accounts, and virtual networks. &lt;/p&gt;

&lt;p&gt;To create a Resource Group, run the script below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;east us&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;resourceGroup&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DemoRG&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;connect-to-sql&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Creating $resourceGroup in &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="nx"&gt;$location&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$location&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;tag&lt;/span&gt; &lt;span class="nx"&gt;$tag&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Environment variables are dynamic values that can affect the behavior of programs and scripts. Once exported, a variable can be referenced by prefixing its name with a dollar sign instead of typing the value directly into each command. Exported variables are available to all processes launched from the current shell session, but they must be re-exported if you open a new shell or terminal.&lt;/p&gt;

&lt;p&gt;In the above script, we exported the environment variables (location, resourceGroup, and tags) which are used in creating the resource group. Because these variables have been exported into the system, we can reference them when needed.&lt;/p&gt;
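&lt;p&gt;As a quick aside, here is how exported variables behave in an ordinary shell session (throwaway values, separate from the resource-group script above):&lt;/p&gt;

```shell
# Illustrative only: throwaway values showing how exported variables behave.
export location="east us"
export resourceGroup="DemoRG"

# Reference a variable by prefixing its name with a dollar sign.
msg="Creating $resourceGroup in $location..."
echo "$msg"

# Quoting matters when a value contains a space: "$location" stays one word.
printf '%s\n' "$location"
```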

&lt;p&gt;You can choose a different Resource Group name, location, and tag if you prefer.&lt;/p&gt;

&lt;p&gt;After running the previous script, you should get an output similar to the one below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;Creating&lt;/span&gt; &lt;span class="nx"&gt;DemoRG&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;east&lt;/span&gt; &lt;span class="nx"&gt;us&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/subscriptions/5h12gg64-60d2-1b4h-6j7c-c810419k33v2/resourceGroups/DemoRG&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;location&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;eastus&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;managedBy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DemoRG&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;properties&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;provisioningState&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Succeeded&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tags&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Microsoft.Resources/resourceGroups&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This shows that the Resource Group was successfully created. You can still check directly in your Azure portal to be sure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qml5b03vbsk0fa36s4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qml5b03vbsk0fa36s4c.png" alt="Resource Group"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an App Service Plan
&lt;/h2&gt;

&lt;p&gt;In Azure, an App Service Plan is a logical container for hosting Azure App Service apps. It defines the underlying virtual machine instances and resources required to host the apps. It is easy to confuse this with Azure App Service itself, so let's distinguish the two.&lt;/p&gt;

&lt;p&gt;Azure App Service is a Platform as a Service (PaaS) offering in Microsoft Azure that enables developers to quickly build, deploy, and manage web apps, mobile app backends, and RESTful APIs. It provides a fully managed platform for hosting and scaling web applications without the need to manage the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;An Azure App Service Plan, on the other hand, is a logical container for hosting one or more Azure App Service apps. It provides the necessary infrastructure resources, such as virtual machines, memory, and storage, to run and scale the apps. The resources allocated to an App Service Plan determine the capacity and performance of the apps hosted in that plan.&lt;/p&gt;

&lt;p&gt;To create an App Service Plan, run the command below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;appServicePlan&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;demo_app_service&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Creating $appServicePlan&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;appservice&lt;/span&gt; &lt;span class="nx"&gt;plan&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$appServicePlan&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$location&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;From the above, we exported an &lt;code&gt;appServicePlan&lt;/code&gt; variable which the command will reference to create the App Service Plan. You can check your Azure portal to see the Azure app service plan created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcu6o5sg9o57atnmj98h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcu6o5sg9o57atnmj98h.png" alt="App Service Plan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Web App
&lt;/h2&gt;

&lt;p&gt;Recall that an App Service Plan is used to host one or more App Services while an App Service itself is used to deploy and manage web apps. &lt;/p&gt;

&lt;p&gt;When you create a Web App from the Azure CLI, it is automatically associated with an App Service Plan. This is because a Web App needs a platform to run on, and the plan provides that platform; the Web App is created within the plan as a child resource.&lt;/p&gt;

&lt;p&gt;An App Service Plan is needed in Azure to allocate and manage the computing resources required for hosting one or more App Services. It defines the underlying infrastructure resources needed to run and scale your Web Apps, such as CPU, memory, disk space, and network bandwidth. You must choose an App Service Plan when creating an App Service, which allows you to scale up or down the resources allocated to your App Service based on demand. An App Service Plan ensures that your Web Apps have the necessary resources to operate efficiently and can easily scale resources up or down to meet changing demands.&lt;/p&gt;

&lt;p&gt;To create a Web App, run the below command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;webapp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;FavourWebApp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Creating $webapp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;webapp&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$webapp&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;plan&lt;/span&gt; &lt;span class="nx"&gt;$appServicePlan&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Again, we exported a &lt;code&gt;webapp&lt;/code&gt; variable which the command referenced when creating the Web App. The variables exported earlier, such as &lt;code&gt;appServicePlan&lt;/code&gt; and &lt;code&gt;resourceGroup&lt;/code&gt;, did not need to be exported again because they are still available in the current shell session. The Web App is then created under the same Resource Group and location as the App Service Plan.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; If you encounter an error message similar to the one below, it indicates that another user has already created or deployed a web app with that name. When you create a Web App, it will have a subdomain of &lt;strong&gt;azurewebsites.net&lt;/strong&gt;. If the name you have chosen is already being used, you will need to select a different name to proceed. For instance, when I used the name &lt;strong&gt;DemoApp&lt;/strong&gt;, it returned an error because the name was already in use.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;Creating&lt;/span&gt; &lt;span class="nx"&gt;DemoApp&lt;/span&gt;
&lt;span class="nx"&gt;Webapp&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DemoApp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;already&lt;/span&gt; &lt;span class="nx"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;The&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;use&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;existing&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;s settings.
Unable to retrieve details of the existing app &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="nx"&gt;DemoApp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;. Please check that the app is a part of the current subscription


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After successfully deploying the web app, you can check it in the Azure Portal under App Services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppe5ymrze83zsz5duxp9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppe5ymrze83zsz5duxp9.png" alt="App Services"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also load the URL in your browser to view it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faix83436wds5ogczlj2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faix83436wds5ogczlj2w.png" alt="Web App"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an SQL Server
&lt;/h2&gt;

&lt;p&gt;A SQL Server is a software program that provides a platform for creating, managing, and accessing databases using Structured Query Language (SQL). It provides the infrastructure to store, manage, and secure data, and a database &lt;strong&gt;must&lt;/strong&gt; be attached to a SQL Server for its data to be accessed and managed.&lt;/p&gt;

&lt;p&gt;Before we create the SQL database, we need to create a SQL Server that the database will run on. &lt;/p&gt;

&lt;p&gt;To do this, run the below command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;favourdemo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;login&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;favour&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;yourpassword@123&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Creating $server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;sql&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$server&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$location&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;admin&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="nx"&gt;$login&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;admin&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt; &lt;span class="nx"&gt;$password&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above configuration, you can export a different server name, login name, and password of your choice.&lt;/p&gt;

&lt;p&gt;Similar to how a subdomain is created for a Web App, creating an SQL Server on Azure results in a subdomain of &lt;strong&gt;database.windows.net&lt;/strong&gt;. If the chosen name for the SQL Server is already taken, an error message will be displayed.&lt;/p&gt;

&lt;p&gt;The rules for choosing a password are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your password must be at least 8 characters in length.&lt;/li&gt;
&lt;li&gt;Your password must be no more than 128 characters in length.&lt;/li&gt;
&lt;li&gt;Your password must contain characters from three of the following categories – English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, etc.).&lt;/li&gt;
&lt;li&gt;Your password cannot contain all or part of the login name. Part of a login name is defined as three or more consecutive alphanumeric characters.&lt;/li&gt;
&lt;/ul&gt;
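&lt;p&gt;The rules above can be sketched as a small local check. This is a hypothetical helper for illustration only; Azure performs its own validation server-side, and this sketch ignores subtleties such as case-insensitive login matching.&lt;/p&gt;

```shell
# Illustrative helper: returns 0 when password $1 satisfies the rules
# for login name $2, non-zero otherwise.
check_password() {
  pw=$1 login=$2
  len=${#pw}
  [ "$len" -ge 8 ] || return 1        # rule 1: at least 8 characters
  [ "$len" -le 128 ] || return 1      # rule 2: at most 128 characters
  cats=0                              # rule 3: three of four character classes
  case $pw in *[A-Z]*) cats=$((cats + 1)) ;; esac
  case $pw in *[a-z]*) cats=$((cats + 1)) ;; esac
  case $pw in *[0-9]*) cats=$((cats + 1)) ;; esac
  case $pw in *[!a-zA-Z0-9]*) cats=$((cats + 1)) ;; esac
  [ "$cats" -ge 3 ] || return 1
  # rule 4: no run of 3+ consecutive characters from the login name
  i=1
  while [ $((i + 2)) -le ${#login} ]; do
    frag=$(printf '%s' "$login" | cut -c "$i-$((i + 2))")
    case $pw in *"$frag"*) return 1 ;; esac
    i=$((i + 1))
  done
  return 0
}
```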

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; If you do not follow the password rules, you will get an error.&lt;/p&gt;

&lt;p&gt;You can check your Azure portal to see the created SQL Server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwadr8mspku55onv1kd4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwadr8mspku55onv1kd4.png" alt="SQL Server"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Access to the SQL Server
&lt;/h3&gt;

&lt;p&gt;After creating the SQL Server, we need to configure a firewall rule to control access to the server. This is important for security: it ensures that only authorized users or applications can reach the SQL Server. Without proper firewall configuration, the server may be vulnerable to unauthorized access and potential security breaches.&lt;/p&gt;

&lt;p&gt;To do this, run the below command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;startIp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.0.0.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;endIp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.0.0.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Creating firewall rule with starting ip of $startIp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;ending&lt;/span&gt; &lt;span class="nx"&gt;ip&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;$endIp&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;sql&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="nx"&gt;firewall&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="nx"&gt;$server&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;AllowYourIp&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt; &lt;span class="nx"&gt;$startIp&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt; &lt;span class="nx"&gt;$endIp&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;From the above configuration, we export a start IP and an end IP, which define the range of IP addresses that are allowed to connect to the server.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;start IP&lt;/strong&gt; is the first IP address in the range of allowed addresses, while the &lt;strong&gt;end IP&lt;/strong&gt; is the last IP address in the range. Any IP address outside this range will be blocked from accessing the SQL Server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; Because both startIP and endIP are set to “0.0.0.0”, it means we are opening the firewall to allow access from anywhere. This is strictly for demo purposes. It is unsafe for production environments so always ensure to restrict access to your server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an Azure SQL Database
&lt;/h2&gt;

&lt;p&gt;An SQL database is a type of relational database management system (RDBMS) that stores data in a structured format using tables consisting of rows and columns. It uses SQL (Structured Query Language) to manage the data and allows for easy retrieval, insertion, and modification of data. When an App Service app is connected to a database, it typically stores structured data that can be queried and manipulated using SQL or other database management tools. Examples of this type of data include user information, product catalogs, and transaction logs.&lt;/p&gt;

&lt;p&gt;To create an Azure SQL database in Azure, run the below command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;demodatabase&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Creating $database&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;sql&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="nx"&gt;$server&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$database&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;objective&lt;/span&gt; &lt;span class="nx"&gt;S0&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above configuration exports the &lt;code&gt;database&lt;/code&gt; variable (the database name) and uses it to create the database on the SQL Server we previously created, under the same Resource Group.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--service-objective S0&lt;/code&gt; is a parameter used in the Azure CLI command to set the performance level of the Azure SQL Database when it is created or updated. In this case, &lt;code&gt;S0&lt;/code&gt; is the service objective, which represents the &lt;strong&gt;Standard&lt;/strong&gt; service tier of Azure SQL Database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuamfr59k7egdflajfy9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuamfr59k7egdflajfy9a.png" alt="SQL Database"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Get Connection String for the Database
&lt;/h3&gt;

&lt;p&gt;A connection string for a database is a text string that contains the information required to establish a connection to a database server. It typically includes the name and location of the database, the name of the server hosting the database, and authentication information such as a username and password. &lt;/p&gt;

&lt;p&gt;The connection string is &lt;strong&gt;required&lt;/strong&gt; by applications or programs to establish a connection to the database server.&lt;/p&gt;

&lt;p&gt;To get the connection string for the newly created database, run the below command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;connstring&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;sql&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="nx"&gt;show&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;connection&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$database&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="nx"&gt;$server&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ado&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;net&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//ado.net) --output tsv&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;--client ado.net&lt;/code&gt;&lt;/strong&gt;: This specifies the client driver to use for the connection string. In this case, we are using the ADO.NET client driver.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;--output tsv&lt;/code&gt;&lt;/strong&gt;: This specifies that the output of the command should be in TSV (tab-separated values) format, a plain-text format that is easy to parse programmatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;connstring=$(...)&lt;/code&gt;&lt;/strong&gt;: This assigns the output of the command to a shell variable called &lt;strong&gt;&lt;code&gt;connstring&lt;/code&gt;&lt;/strong&gt;. The &lt;strong&gt;&lt;code&gt;$()&lt;/code&gt;&lt;/strong&gt; syntax allows you to execute a command and capture its output.&lt;/p&gt;

&lt;p&gt;The command &lt;strong&gt;&lt;code&gt;az sql db show-connection-string&lt;/code&gt;&lt;/strong&gt; is used to retrieve the connection string for the Azure SQL database that was just created. The output of this command is a string that contains information such as the server name, database name, login credentials, and other settings required to connect to the database as seen below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;Server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;tcp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;favourdemo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;database&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;windows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1433&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nx"&gt;Initial&lt;/span&gt; &lt;span class="nx"&gt;Catalog&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;demodatabase&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nx"&gt;Persist&lt;/span&gt; &lt;span class="nx"&gt;Security&lt;/span&gt; &lt;span class="nx"&gt;Info&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;False&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nx"&gt;User&lt;/span&gt; &lt;span class="nx"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;username&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;;Password=&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;password&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;;MultipleActiveResultSets=False;Encrypt=true;TrustServerCertificate=False;Connection Timeout=30;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;By assigning the output of the command to a variable named "&lt;strong&gt;connstring&lt;/strong&gt;" using the syntax &lt;strong&gt;&lt;code&gt;connstring=$(az sql db show-connection-string --name $database --server $server --client ado.net --output tsv)&lt;/code&gt;&lt;/strong&gt;, the connection string can be referenced later in the script.&lt;/p&gt;
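&lt;p&gt;The &lt;code&gt;$(...)&lt;/code&gt; capture pattern is plain Bash and can be tried locally without the Azure CLI; this sketch substitutes &lt;code&gt;echo&lt;/code&gt; for the &lt;code&gt;az&lt;/code&gt; call, and the server name is a made-up stand-in:&lt;/p&gt;

```shell
# $(...) runs the command in a subshell and substitutes its stdout,
# so the variable ends up holding the command's output.
connstring=$(echo "Server=tcp:example.database.windows.net,1433;")
echo "$connstring"
# prints: Server=tcp:example.database.windows.net,1433;
```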

&lt;h3&gt;
  
  
  Add your Credentials to Connstring
&lt;/h3&gt;

&lt;p&gt;After getting the connection string using the command &lt;strong&gt;&lt;code&gt;az sql db show-connection-string&lt;/code&gt;&lt;/strong&gt;, the resulting string will contain placeholders for the user ID and password, indicated by &lt;strong&gt;&lt;code&gt;&amp;lt;username&amp;gt;&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;&amp;lt;password&amp;gt;&lt;/code&gt;&lt;/strong&gt; respectively. We need to replace these placeholders with our actual login credentials to be able to connect to the database. Therefore, we need to add our credentials to the connection string to establish a successful connection to the Azure SQL Database.&lt;/p&gt;

&lt;p&gt;To do this, run the commands below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;connstring&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;connstring&lt;/span&gt;&lt;span class="c1"&gt;//&amp;lt;username&amp;gt;/$login}&lt;/span&gt;
&lt;span class="nx"&gt;connstring&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;connstring&lt;/span&gt;&lt;span class="c1"&gt;//&amp;lt;password&amp;gt;/$password}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
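&lt;p&gt;These two lines rely on Bash's &lt;code&gt;${variable//pattern/replacement}&lt;/code&gt; expansion, which replaces every occurrence of the pattern in the variable's value. A minimal local sketch with dummy credentials (none of these values are real):&lt;/p&gt;

```shell
# Replace the <username> and <password> placeholders in a dummy string.
login="dbadmin"
password="S3cret"
conn='User ID=<username>;Password=<password>;'
conn=${conn//<username>/$login}
conn=${conn//<password>/$password}
echo "$conn"
# prints: User ID=dbadmin;Password=S3cret;
```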
&lt;h3&gt;
  
  
  Assign the Connection String to an App Setting in the Web App
&lt;/h3&gt;

&lt;p&gt;Application settings are configuration values that are stored within the App Service, and can be accessed by the application code at runtime. These settings can include things like database connection strings, API keys, and other configuration values specific to your application.&lt;/p&gt;

&lt;p&gt;Assigning the connection string to an app setting in the web app makes it easy to manage and update the connection string without having to modify the code of the web app. By storing the connection string as an app setting, the web app can access it at runtime and establish a connection to the database without exposing sensitive information like usernames and passwords in the code. This also makes it easier to switch between different databases or servers, as the connection string can be updated in one central location (the app settings) rather than having to update it in multiple places in the code. &lt;/p&gt;

&lt;p&gt;To do this, run the below command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;webapp&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="nx"&gt;appsettings&lt;/span&gt; &lt;span class="kd"&gt;set&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$webapp&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;settings&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SQLSRV_CONNSTR=$connstring&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
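&lt;p&gt;App Service exposes application settings to the running app as environment variables, so application code can read the value by name at runtime. A local simulation of what the app sees (the connection string here is a stand-in, not a real one):&lt;/p&gt;

```shell
# Simulate the app setting as the app would receive it at runtime:
export SQLSRV_CONNSTR="Server=tcp:example.database.windows.net,1433;Initial Catalog=demodatabase;"
echo "$SQLSRV_CONNSTR"
# prints: Server=tcp:example.database.windows.net,1433;Initial Catalog=demodatabase;
```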
&lt;h2&gt;
  
  
  Create a storage account
&lt;/h2&gt;

&lt;p&gt;A storage account in Azure is a secure and scalable cloud-based storage solution that allows users to store and access various types of data such as files, blobs, queues, tables, and disks, among others. Connecting an App Service app to a storage account allows the app to access and store data in the storage account, which can be used to store unstructured data such as files, images, videos, and other data that the app may require.&lt;/p&gt;

&lt;p&gt;To create a storage account, run the below command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;favourdemostore&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Creating $storage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;storage&lt;/span&gt; &lt;span class="nx"&gt;account&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$storage&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$location&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;sku&lt;/span&gt; &lt;span class="nx"&gt;Standard_LRS&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;--sku Standard_LRS&lt;/code&gt; is a parameter used in the Azure CLI command to set the pricing tier and replication scheme of the Azure Storage account being created. In this case, &lt;code&gt;Standard_LRS&lt;/code&gt; represents the &lt;strong&gt;Standard&lt;/strong&gt; pricing tier with the Locally Redundant Storage (LRS) replication scheme.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c6j1q34aolmpszzox65.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c6j1q34aolmpszzox65.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; The storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only.&lt;/p&gt;

&lt;h3&gt;
  
  
  Retrieve the storage account connection string
&lt;/h3&gt;

&lt;p&gt;The connection string here is a string that contains all the necessary information to connect to a storage account. This includes details like the storage account name, access key, and endpoint. The connection string allows you to access and manage the data stored in the storage account.&lt;/p&gt;

&lt;p&gt;To connect an app service app to a storage account in Azure, we need to obtain the storage account connection string. This connection string is crucial because it enables the app to access and manipulate the data stored in the storage account, such as uploading and downloading files, accessing and modifying blobs, and more. Without the connection string, the app will not be able to communicate with the storage account and perform these actions.&lt;/p&gt;

&lt;p&gt;To retrieve the connection string, run the below command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;connstr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;storage&lt;/span&gt; &lt;span class="nx"&gt;account&lt;/span&gt; &lt;span class="nx"&gt;show&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;connection&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$storage&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="nx"&gt;connectionString&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="nx"&gt;tsv&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Assign the connection string to an App setting in the Web app
&lt;/h3&gt;

&lt;p&gt;Now, we will set the connection string as a value for an App Setting in the configuration of a Web app deployed on Azure. This allows the Web app to access the storage account, as it can now retrieve the connection string from its configuration settings.&lt;/p&gt;

&lt;p&gt;Assigning the connection string to an App Setting ensures that the connection string is securely stored, and not exposed in the application code. It also provides an easy way to update the connection string if needed, as it can be modified in the Web app configuration without having to redeploy the application code.&lt;/p&gt;

&lt;p&gt;To do this, run the below command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;webapp&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="nx"&gt;appsettings&lt;/span&gt; &lt;span class="kd"&gt;set&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$webapp&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;settings&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;STORAGE_CONNSTR=$connstr&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After carrying out the above steps, check the configuration settings under the Web App in App Services; you should see the application settings (connection strings) we have configured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw6hxzbib1kw00k0i8t4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw6hxzbib1kw00k0i8t4.png" alt="Configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that is how you connect an App Service to an Azure SQL Database and a Storage Account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resource Clean up
&lt;/h2&gt;

&lt;p&gt;It is important to clean up resources that were created to avoid unnecessary charges after tests. Fortunately, all resources created during this process are tied to the same Resource Group, so they can be terminated with a single command. This makes cleanup quick and easy.&lt;/p&gt;

&lt;p&gt;To perform the clean up, run the below command:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;p&gt;&lt;span class="nx"&gt;az&lt;/span&gt; &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="k"&gt;delete&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="nx"&gt;$resourceGroup&lt;/span&gt;&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article provided a comprehensive guide on how to connect an App Service app to both an SQL Database and a Storage Account. SQL Databases are essential for storing structured data such as user information, while Storage Accounts are used for storing unstructured data like media files or logs. &lt;/p&gt;

&lt;p&gt;By following the steps outlined in this article, you can easily establish a connection between your App Service app and these services, allowing your app to access and manipulate data in a secure and efficient manner.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>sql</category>
      <category>storage</category>
    </item>
    <item>
      <title>Using the Linux Free Command With Examples</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Tue, 04 Apr 2023 04:30:29 +0000</pubDate>
      <link>https://forem.com/danielfavour/using-the-linux-free-command-with-examples-4lkc</link>
      <guid>https://forem.com/danielfavour/using-the-linux-free-command-with-examples-4lkc</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;I wrote this article originally for &lt;a href="https://www.turing.com/kb/how-to-use-the-linux-free-command" rel="noopener noreferrer"&gt;Turing&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Regular memory checks are crucial to maintaining the performance of your Linux system. Knowing your system's memory can also aid in debugging and preventing poor application response times while running memory-intensive programs.&lt;/p&gt;

&lt;p&gt;There are several ways to view the RAM on your Linux system using the command line, but for the scope of this article, we will focus on the 'free' command.&lt;/p&gt;

&lt;p&gt;In this article, we will discuss what the Linux free command is, how it works, and the benefits of the command.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the &lt;code&gt;free&lt;/code&gt; command?
&lt;/h2&gt;

&lt;p&gt;The free command is a &lt;a href="https://www.turing.com/jobs/remote-linux-developer" rel="noopener noreferrer"&gt;Linux&lt;/a&gt; command that allows you to check the RAM on your system, or, more generally, to view the memory statistics of the Linux operating system.&lt;/p&gt;

&lt;p&gt;To view your system's memory statistics, run the free command in your terminal as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user@ubuntu:~$ free

          total    used      free   shared   buff/cache  available
Mem:    8029356  794336   6297928   183384       937092    6816804
Swap:         0       0         0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Understanding the output
&lt;/h2&gt;

&lt;p&gt;The output of the &lt;code&gt;free&lt;/code&gt; command provides various metrics related to system memory, including total, used, free, shared, buff/cache, and available. To gain a comprehensive understanding of these fields, it is important to examine each one individually. To begin, we'll focus on the 'Mem' metric, which represents the total amount of physical RAM installed on the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Examining the &lt;code&gt;Mem&lt;/code&gt; metrics and its fields
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;Mem&lt;/code&gt; metric, as displayed in the output of the &lt;code&gt;free&lt;/code&gt; command, serves as a measurement of the physical RAM (Random Access Memory) installed on a computer system. It offers a comprehensive view of the total amount of installed RAM and its current usage by running programs and processes. By monitoring this metric, it is possible to detect any potential memory-related issues within the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcljdnlrqzgggjrqaege.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcljdnlrqzgggjrqaege.png" alt="Mem metrics" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Mem metric also includes several fields that give an overview of the system's memory usage, such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Total&lt;/strong&gt;: This is the total amount of physical RAM on your system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Used&lt;/strong&gt;: This shows the amount of memory that has been used up or amount of RAM that is currently being utilized by running programs and processes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Free&lt;/strong&gt;: This is the amount of physical memory that is not currently being used by any running processes and is ready to be allocated to new processes. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Shared&lt;/strong&gt;: This displays the total amount of memory used by the temporary &lt;code&gt;tmpfs&lt;/code&gt; file system. &lt;code&gt;Tmpfs&lt;/code&gt; is a file system that stores files in the computer's main memory (RAM) making it faster to access compared to traditional storage methods like a hard drive. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Buff/cache&lt;/strong&gt;: This is the memory that the kernel (operating system) uses to store recently used data so that it can be accessed quickly. It is used to speed up the performance of the computer by reducing the amount of time it takes to access data from the hard drive. Think of it like a temporary storage area where the computer stores data that it might need soon, so that it doesn't have to search for it again later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Available&lt;/strong&gt;: This shows an estimate of how much memory is still open for use. This value can fluctuate as processes start and stop, and memory is freed up and allocated. So, while some of this memory may not actively be used by a process at the moment, it is still available to be allocated to a process if needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Free and available memory can be tricky to understand. Think of free memory as empty rooms in your house that are ready to be occupied, representing the amount of physical memory that is not currently being used by any running processes and is ready to be allocated to new processes. You can also think of available memory as the total number of rooms that can be occupied including the empty ones and the ones in use for caching and buffering.&lt;/p&gt;

&lt;p&gt;An example of caching would be storing items in a storage room that you frequently use, so that you can easily access them when you need them. Buffering would be like having a guest room ready to be used in case you have unexpected visitors. Both the storage room and guest room are being used, but they are still considered "available" because they can be used for their intended purpose if necessary.&lt;/p&gt;
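&lt;p&gt;As a small worked example of reading these fields, the share of RAM actually in use can be computed from the &lt;code&gt;Mem&lt;/code&gt; line. Here the sample output from earlier is fed in directly rather than running &lt;code&gt;free&lt;/code&gt; live, so the numbers are reproducible:&lt;/p&gt;

```shell
# Columns on the Mem line: $2 = total, $3 = used
# (values taken from the sample free output shown earlier).
echo "Mem:    8029356  794336   6297928   183384       937092    6816804" \
  | awk '{ printf "%.1f%% of RAM in use\n", $3 / $2 * 100 }'
# prints: 9.9% of RAM in use
```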

&lt;h3&gt;
  
  
  Examining the &lt;code&gt;swap&lt;/code&gt; metrics and its fields
&lt;/h3&gt;

&lt;p&gt;The swap metric shows the amount of swap space that is currently being used and the amount of swap space that is available for use.&lt;/p&gt;

&lt;p&gt;Swap, also known as virtual memory, is a mechanism that enables computer systems to use extra memory by creating a file or partition on a storage volume. This serves as a backup option when the system's physical RAM is full and can't accommodate new processes. The operating system transfers data from RAM to the swap space, allowing the system to continue running smoothly. &lt;/p&gt;

&lt;p&gt;The swap metric also includes several fields that give an overview of the system's memory usage, such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;total&lt;/strong&gt;: The size of the swap partition or swap file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;used&lt;/strong&gt;: The amount of swap space in use&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;free&lt;/strong&gt;: The remaining (unused) swap space&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Referring back to the output from when the free command was first run, the swap fields were all zero (0). This means that no swap space has been configured on the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Free command options
&lt;/h2&gt;

&lt;p&gt;The free command can be tailored to show memory usage in any desired format.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To get the memory information in bytes, add the &lt;code&gt;-b&lt;/code&gt; option to the command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user@ubuntu:~$ free -b

             total         used         free      sharedbuff/cache     
  available
Mem:    8222060544    700334080   6214823936   188076032     1306902528    7087677440
Swap:         0       0         0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
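&lt;p&gt;As a quick sanity check on the units, the byte total above is exactly 1024 times the kilobyte total from the earlier default output, since &lt;code&gt;free&lt;/code&gt; reports binary (1024-based) units:&lt;/p&gt;

```shell
# 8222060544 bytes / 1024 = 8029356 kilobytes, matching the default output.
echo $((8222060544 / 1024))
# prints: 8029356
```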



&lt;ul&gt;
&lt;li&gt;To get the memory information in kilobytes, add the &lt;code&gt;-k&lt;/code&gt; option to the command. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the free command is used alone without any option, as was demonstrated earlier, the memory is displayed in kilobytes by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user@ubuntu:~$ free -k
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To view the memory information in megabytes, add the &lt;code&gt;-m&lt;/code&gt; option to the command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user@ubuntu:~$ free -m

          total    used    free   shared    buff/cache   available
Mem:       7841     668    5925      179          1247        6759
Swap:         0       0         0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To view the memory information in gigabytes, add the &lt;code&gt;-g&lt;/code&gt; option to the command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user@ubuntu:~$ free -g

          total    used    free   shared    buff/cache   available
Mem:          7       0       5        0             1           6
Swap:         0       0       0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To view the memory information in human-readable format, add the &lt;code&gt;-h&lt;/code&gt; option to the command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user@ubuntu:~$ free -h

         total     used    free   shared   buff/cache   available
Mem:     7.7Gi    675Mi   5.8Gi    179Mi        1.2Gi       6.6Gi
Swap:       0B       0B      0B

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Benefits of the &lt;code&gt;free&lt;/code&gt; command in Linux
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8hvcklzdsx753g3jqpj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8hvcklzdsx753g3jqpj.png" alt="Benefits of free command" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;free&lt;/code&gt; command is an invaluable tool for managing and monitoring memory usage. In this section, we will delve into the benefits of utilizing this command.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Display current memory usage&lt;/strong&gt;&lt;br&gt;
Running the free command without any arguments will display the current amount of used and available memory on the system, as well as the amount of memory being used for system buffers and disk cache. This helps you notice when something is eating up your system's memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor memory usage over time&lt;/strong&gt;&lt;br&gt;
By using the free command in combination with the watch command, you can display the current memory usage at regular intervals. For example, &lt;code&gt;watch -n 5 free -m&lt;/code&gt; will display the current memory usage every 5 seconds. This can be useful for identifying patterns in memory usage over time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identify memory leaks&lt;/strong&gt;&lt;br&gt;
If the "used" column of the free command output is consistently high, it may indicate a memory leak in one of the running programs. By running the command periodically and checking the used memory, you can identify the process that's causing the leak.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check for high buffer/cache usage&lt;/strong&gt;&lt;br&gt;
If the "buffers" and "cached" columns are consistently high, it may indicate that the system is using a lot of memory for caching. While this is generally not a problem, it can cause slow performance if the system is low on memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Display memory usage in different units&lt;/strong&gt;&lt;br&gt;
By using the -m or -g options, you can display the memory usage in megabytes or gigabytes, respectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Comparison with other command-line tools&lt;/strong&gt;&lt;br&gt;
The free command can be used in conjunction with other command-line tools like top, htop, and vmstat, to provide a more complete picture of the system's memory usage and performance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we discussed the free command, how it works and its benefits.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;free&lt;/code&gt; command in Linux is a useful tool for monitoring the system's memory usage as well as a valuable tool for managing and optimizing the performance of Linux systems. Understanding the output of the free command can help administrators and users identify potential memory bottlenecks and troubleshoot performance issues.&lt;/p&gt;

</description>
      <category>tailwindcss</category>
    </item>
    <item>
      <title>OpenTelemetry vs Datadog - Choosing between OpenTelemetry and Datadog</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Mon, 27 Mar 2023 15:58:25 +0000</pubDate>
      <link>https://forem.com/danielfavour/opentelemetry-vs-datadog-choosing-between-opentelemetry-and-datadog-2c84</link>
      <guid>https://forem.com/danielfavour/opentelemetry-vs-datadog-choosing-between-opentelemetry-and-datadog-2c84</guid>
      <description>&lt;p&gt;OpenTelemetry and DataDog are both used for monitoring applications. While OpenTelemetry is an open source observability framework, DataDog is a cloud-monitoring SaaS service. OpenTelemetry is a collection of tools, APIs, and SDKs that help generate and collect telemetry data (logs, metrics, and traces).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oSj4yj_p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z1mihko762fm75lurlvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oSj4yj_p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z1mihko762fm75lurlvq.png" alt="cover image" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenTelemetry does not provide a storage and visualization layer, while DataDog does. If you’re using OpenTelemetry, you need an observability backend like SigNoz or DataDog to visualize and store the collected telemetry data.&lt;/p&gt;

&lt;p&gt;So why do you need to use OpenTelemetry at all? DataDog provides agents to instrument applications and can be used as an end-to-end solution. But more and more &lt;a href="https://tech.ebayinc.com/engineering/why-and-how-ebay-pivoted-to-opentelemetry/"&gt;companies&lt;/a&gt; are moving to OpenTelemetry for their observability setup. There are many reasons why companies are moving to OpenTelemetry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use OpenTelemetry?
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry is quietly becoming the world standard for instrumenting cloud-native applications. Here are some reasons why people prefer OpenTelemetry over native vendor agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Part of the CNCF landscape&lt;/strong&gt;
OpenTelemetry is part of the &lt;a href="https://www.cncf.io/"&gt;Cloud Native Computing Foundation&lt;/a&gt;, so it works well with other tools in the CNCF landscape. It is the second most active CNCF project after Kubernetes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No vendor lock-in&lt;/strong&gt;
If you use OpenTelemetry, you can avoid vendor lock-in with SaaS services. The data collected by OpenTelemetry can be sent to multiple backends, and most observability vendors support the OTLP data format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Future-proof instrumentation&lt;/strong&gt;
OpenTelemetry has a wide community working on it to support the instrumentation of a wide range of libraries, frameworks, and languages. If you use an instrumentation SDK from a vendor, you are susceptible to the vendor’s support of emerging technologies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Company knowledge base and easy onboarding&lt;/strong&gt;
Using OpenTelemetry, you can have a standard observability setup in place at your company. Over time, the knowledge base will improve, and it will be easier to onboard new members of the observability team. In case the team decides to switch the vendor, it is easy to make configuration changes for the backend.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In an observability stack, the instrumentation layer is most tightly coupled with your application as it involves code changes. Using OpenTelemetry, you can have peace of mind about having standardized observability set up in place.&lt;/p&gt;
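&lt;p&gt;As a concrete illustration of this decoupling, here is a minimal OpenTelemetry Collector configuration sketch that receives OTLP data and fans it out to two backends at once. The endpoint value is a placeholder, and the &lt;code&gt;datadog&lt;/code&gt; exporter is assumed to come from the opentelemetry-collector-contrib distribution:&lt;/p&gt;

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  # Send telemetry to an OTLP-compatible backend (e.g. SigNoz)...
  otlp:
    endpoint: "my-otel-backend:4317"   # placeholder endpoint
  # ...and, in parallel, to Datadog (contrib distribution).
  datadog:
    api:
      key: "${DD_API_KEY}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp, datadog]
```

&lt;p&gt;Switching vendors then becomes a change to the &lt;code&gt;exporters&lt;/code&gt; section of this file rather than a re-instrumentation of application code.&lt;/p&gt;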

&lt;h2&gt;
  
  
  Key Differences between OpenTelemetry and DataDog
&lt;/h2&gt;

&lt;p&gt;Let us explore the key differences between OpenTelemetry and Datadog.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Collection
&lt;/h3&gt;

&lt;p&gt;OpenTelemetry provides language-specific client libraries and SDKs for instrumenting applications, services, and infrastructure. The OpenTelemetry client libraries can be used to collect telemetry data from applications written in various programming languages, and they provide a vendor-neutral, standardized way to collect and export telemetry data.&lt;/p&gt;

&lt;p&gt;On the other hand, Datadog relies on its own agent to collect data from applications, infrastructure, and other services. The Datadog agent can be installed on hosts, containers, and other environments to collect metrics, traces, and logs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customization and flexibility
&lt;/h3&gt;

&lt;p&gt;OpenTelemetry is an open-source framework, meaning developers can access and customize the source code to meet their specific needs. This makes it easier for developers to integrate OpenTelemetry into their existing systems and workflows.&lt;/p&gt;

&lt;p&gt;In contrast, Datadog is a closed-source platform with limited customization options. While it offers a wide range of integrations with other tools and services, it can be more challenging for developers to modify and adapt Datadog to their specific needs. This can be a limiting factor for teams that require a high degree of flexibility and customization in their observability practices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Storage
&lt;/h3&gt;

&lt;p&gt;OpenTelemetry does not provide any data storage capabilities on its own. Instead, you need to choose an observability backend to store and analyze the telemetry data collected by OpenTelemetry.&lt;/p&gt;

&lt;p&gt;In contrast, Datadog provides a comprehensive cloud-based platform that includes built-in data storage capabilities. This means you can use Datadog for data collection and storage, eliminating the need for a separate observability backend.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration with other tools
&lt;/h3&gt;

&lt;p&gt;One of the key advantages of OpenTelemetry is its vendor-neutral approach, which allows users to send data to any observability backend of their choice. This provides users with greater flexibility and the ability to customize their observability stack according to their needs.&lt;/p&gt;

&lt;p&gt;On the other hand, DataDog is a closed SaaS platform that provides its own set of APIs and SDKs to collect and analyze telemetry data. While it offers integrations with various third-party tools and services, it is not as flexible as OpenTelemetry.&lt;/p&gt;

&lt;h3&gt;
  
  
  Community
&lt;/h3&gt;

&lt;p&gt;OpenTelemetry is a Cloud Native Computing Foundation (&lt;a href="https://www.cncf.io/"&gt;CNCF&lt;/a&gt;) project, which means it is an open-source project with a thriving community of contributors and users. Community support means that developers can access a wealth of resources, including documentation, tutorials, and support forums, to help them adopt and use OpenTelemetry effectively.&lt;/p&gt;

&lt;p&gt;On the other hand, while DataDog offers support to its users, most of it is paid support. This means that developers may have limited resources available to them if they encounter issues or need help using DataDog. However, DataDog offers various paid support options, including phone and email support, as well as a knowledge base and community forums.&lt;/p&gt;

&lt;p&gt;Now that we have discussed the differences, it’s time to answer what to choose between OpenTelemetry and DataDog. DataDog also provides support for OpenTelemetry, but the extent of this support is debatable: there has been some recent controversy regarding DataDog’s support of the OpenTelemetry codebase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kX8uTNCm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vij89p7u06tfdfmw361n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kX8uTNCm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vij89p7u06tfdfmw361n.png" alt="https://news.ycombinator.com/item?id=34540419" width="683" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since OpenTelemetry does not provide a backend, the question really is to choose between an OpenTelemetry native APM and DataDog.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing between an OpenTelemetry APM and DataDog
&lt;/h2&gt;

&lt;p&gt;An OpenTelemetry-native APM supports the OTLP data format natively and treats it as its primary format for ingestion. SigNoz is an open source observability platform that supports OpenTelemetry &lt;a href="https://signoz.io/blog/opentelemetry-apm/"&gt;natively&lt;/a&gt;. Its &lt;a href="https://signoz.io/docs/#architecture"&gt;architecture&lt;/a&gt; uses OpenTelemetry client libraries and OpenTelemetry Collectors to generate and collect telemetry data.&lt;/p&gt;

&lt;p&gt;Some of the key reasons to choose SigNoz are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SigNoz is open source and supports OTLP as its primary data format.&lt;/li&gt;
&lt;li&gt;Metrics, Traces, and Logs in a single pane&lt;/li&gt;
&lt;li&gt;Correlation across different signals&lt;/li&gt;
&lt;li&gt;Powerful aggregation capabilities on high-cardinality data&lt;/li&gt;
&lt;li&gt;It can be run within your own cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting started with SigNoz
&lt;/h2&gt;

&lt;p&gt;SigNoz can be installed on macOS or Linux computers in just three steps by using a simple install script.&lt;/p&gt;

&lt;p&gt;The install script automatically installs &lt;a href="https://docs.docker.com/engine/install/"&gt;Docker Engine&lt;/a&gt; on Linux. However, on macOS, you must manually install Docker Engine before running the install script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;git&lt;/span&gt; &lt;span class="nx"&gt;clone&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//github.com/SigNoz/signoz.git&amp;gt;&lt;/span&gt;
&lt;span class="nx"&gt;cd&lt;/span&gt; &lt;span class="nx"&gt;signoz&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can visit our documentation for instructions on how to install SigNoz using Docker Swarm and Helm Charts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QU-SVZgu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5squoiktvkbmwnt39hur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QU-SVZgu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5squoiktvkbmwnt39hur.png" alt="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5squoiktvkbmwnt39hur.png" width="800" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check out SigNoz GitHub repo here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R_HgeVg6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmablpw46ylkznbg2rij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R_HgeVg6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmablpw46ylkznbg2rij.png" alt="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmablpw46ylkznbg2rij.png" width="708" height="162"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Related Posts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/open-source-datadog-alternative"&gt;https://signoz.io/blog/open-source-datadog-alternative&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Complete Guide on Docker Logs [All access methods included]</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Fri, 10 Feb 2023 01:44:49 +0000</pubDate>
      <link>https://forem.com/danielfavour/complete-guide-on-docker-logs-all-access-methods-included-55gi</link>
      <guid>https://forem.com/danielfavour/complete-guide-on-docker-logs-all-access-methods-included-55gi</guid>
      <description>&lt;p&gt;Docker logs play a critical role in the management and maintenance of containerized applications. They provide valuable information about the performance and behavior of containers, allowing developers and administrators to troubleshoot issues, monitor resource usage, and optimize application performance. By capturing and analyzing log data, organizations can improve the reliability, security, and efficiency of their containerized environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5086kgbq379uq4mxz1pd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5086kgbq379uq4mxz1pd.png" alt="cover image" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will explore the concept of Docker logs, the types, and how they can be accessed and effectively managed for optimal use in a containerized environment.&lt;/p&gt;

&lt;p&gt;Before diving into what Docker logs are, we need to first understand the concept of Docker and containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  A brief overview of Docker
&lt;/h2&gt;

&lt;p&gt;Docker is a cutting-edge platform for building, distributing, and operating distributed applications. With this technology, developers are able to package their applications and dependencies into containers. These containers can run on any system equipped with a Docker engine.&lt;/p&gt;

&lt;p&gt;Containers are lightweight and portable software packages that contain everything needed to run the application, including the code, runtime, system tools, libraries, and settings. This containerization process allows for seamless deployment and scaling of applications across various environments, leading to consistency and reproducibility in the development process.&lt;/p&gt;

&lt;p&gt;Throughout the development of an application, different components might require different operating systems or configurations. By using Docker, developers can create containers for each component and specify necessary dependencies and configurations. This way, the application can function consistently across different environments, making it easier to test, debug, and deploy. Additionally, since containers are isolated from one another, changes made to a single container will not affect other components or the host operating system. This is the basic concept behind how Docker and containers work.&lt;/p&gt;

&lt;p&gt;Now let us understand what Docker logs are.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Docker Logs?
&lt;/h2&gt;

&lt;p&gt;Docker logs refer to the records of events and messages generated by a Docker container or Docker engine. These logs provide insight into the activities and operations of a container, including its start and stop events, output messages, and error messages.&lt;/p&gt;

&lt;p&gt;The collection and analysis of Docker logs play an essential role in monitoring, troubleshooting, and maintaining the stability and performance of Docker containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Docker Logs
&lt;/h2&gt;

&lt;p&gt;In this section, we will delve into the various types of Docker logs that are generated as a result of container activity. They are:&lt;/p&gt;

&lt;h3&gt;
  
  
  Container Logs
&lt;/h3&gt;

&lt;p&gt;Container logs are records of the standard output and error streams generated by a containerized application. They contain any messages or errors produced by the application as it runs and can be used for troubleshooting and monitoring purposes.&lt;/p&gt;

&lt;p&gt;Logs can be collected through various methods and viewed using tools such as the &lt;code&gt;docker logs&lt;/code&gt; command or a centralized logging platform like &lt;a href="https://signoz.io/docs/userguide/logs/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt;. These logs are essential in ensuring the proper functioning and performance of the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Daemon Logs
&lt;/h3&gt;

&lt;p&gt;Daemon logs are records of events and messages generated by background processes known as Daemons. These processes perform system-level tasks, such as managing network connections and services or executing scheduled tasks. These logs contain information about the daemon's activities, including status updates, error messages, and performance metrics, and are useful for debugging, monitoring, and auditing purposes.&lt;/p&gt;

&lt;p&gt;The logs are typically stored in text files and can be viewed and analyzed using tools like the system's log viewer or a centralized logging platform. These logs play an essential role in ensuring the proper functioning and stability of a computer's background processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Methods of Accessing Docker Logs
&lt;/h2&gt;

&lt;p&gt;Gaining insight into the activity and performance of your Docker containers is essential for ensuring the smooth operation of your applications. In this section, we will explore the various methods available for accessing Docker logs, which include:&lt;/p&gt;

&lt;h3&gt;
  
  
  Accessing Docker logs with Docker CLI
&lt;/h3&gt;

&lt;p&gt;The Docker Command Line Interface (CLI) provides a means for users to interact with Docker components, including containers, images, networks, and more. To access logs generated by Docker containers, the &lt;code&gt;docker logs&lt;/code&gt; command can be used.&lt;/p&gt;

&lt;p&gt;The syntax for accessing Docker logs with the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker logs [OPTIONS] CONTAINER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CONTAINER&lt;/code&gt; is the name or ID of the container that you want to view the logs of.&lt;br&gt;
&lt;code&gt;OPTIONS&lt;/code&gt; is an optional flag that you can use to specify the details of the logs that you want to retrieve. To explore different options available for usage, refer to Docker's &lt;a href="https://docs.docker.com/engine/reference/commandline/logs/#options" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;docker logs&lt;/code&gt; command retrieves and displays the logs generated by a container in the console. The logs can be viewed in real time or after the container has stopped. By default, the command retrieves all logs produced by the container; however, you can specify a time range or limit the number of lines displayed. This flexibility makes &lt;code&gt;docker logs&lt;/code&gt; a valuable tool for debugging and monitoring the performance of Docker containers.&lt;/p&gt;
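&lt;p&gt;When wrapping the CLI from a script, these options map naturally onto an argument list. Below is a small illustrative helper; the function name and defaults are my own, but the flags emitted (&lt;code&gt;--follow&lt;/code&gt;, &lt;code&gt;--tail&lt;/code&gt;, &lt;code&gt;--since&lt;/code&gt;) are documented options of &lt;code&gt;docker logs&lt;/code&gt;:&lt;/p&gt;

```python
# Build a `docker logs` argument list from common options, suitable
# for passing to subprocess.run().

def build_logs_cmd(container, follow=False, tail=None, since=None):
    cmd = ["docker", "logs"]
    if follow:
        cmd.append("--follow")
    if tail is not None:
        cmd += ["--tail", str(tail)]
    if since is not None:
        cmd += ["--since", since]   # e.g. "2023-01-01T00:00:00" or "10m"
    cmd.append(container)
    return cmd

# Example: stream the last 100 lines of a container's logs live.
# subprocess.run(build_logs_cmd("my-app", follow=True, tail=100))
```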
&lt;h3&gt;
  
  
  Accessing Docker Logs with Docker API
&lt;/h3&gt;

&lt;p&gt;The Docker Application Programming Interface (API) enables developers to access and manage Docker components programmatically, including containers, images, networks, and more. The API also provides access to logs generated by Docker containers. A variety of options are available for retrieving logs through the API, including the ability to retrieve logs for a specific container, view logs in real time, and limit the logs to a specific time range or a number of lines. We will look at these options now.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Retrieving logs from a specific container:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can use an API endpoint to retrieve logs from a Docker container. Here is a basic example using curl:&lt;/p&gt;

&lt;p&gt;First, obtain the container ID using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, retrieve the logs from the container using the API endpoint, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl --unix-socket /var/run/docker.sock http:/containers/&amp;lt;CONTAINER_ID&amp;gt;/logs?stderr=1&amp;amp;stdout=1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You will need to replace &lt;code&gt;&amp;lt;CONTAINER_ID&amp;gt;&lt;/code&gt; with the actual ID of the container you want to retrieve logs from.&lt;/p&gt;

&lt;p&gt;In the above example, the &lt;code&gt;stderr&lt;/code&gt; and &lt;code&gt;stdout&lt;/code&gt; parameters are set to 1 to retrieve both standard output and standard error logs. If you only want to retrieve logs from one of these sources, you can set the corresponding parameter to 0.&lt;/p&gt;
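&lt;p&gt;The query strings used throughout this section can also be assembled with a standard library helper rather than by hand. A minimal sketch in Python (the endpoint path follows the Docker Engine API; the function name is my own):&lt;/p&gt;

```python
from urllib.parse import urlencode

# Build the path + query string for the Docker Engine "container logs"
# endpoint. An HTTP client speaking over the Unix socket can then
# request http://localhost + this path.

def logs_path(container_id, stdout=True, stderr=True, **params):
    query = {"stdout": int(stdout), "stderr": int(stderr), **params}
    return f"/containers/{container_id}/logs?{urlencode(query)}"

print(logs_path("abc123"))
# /containers/abc123/logs?stdout=1&stderr=1
print(logs_path("abc123", follow=1, tail=100))
```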

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;View logs in real-time:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following example shows how to retrieve logs in real-time using the Docker API. The &lt;code&gt;follow&lt;/code&gt; query parameter is set to true to enable real-time logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl --unix-socket /var/run/docker.sock &amp;lt;http://v1.40/containers/container_id/logs?stdout=true&amp;amp;stderr=true&amp;amp;follow=true&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Limiting the logs to a specific time range or number of lines:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following example shows how to limit the logs to a specific time range using the Docker API. The &lt;code&gt;since&lt;/code&gt; query parameter specifies the start time, while the &lt;code&gt;until&lt;/code&gt; parameter specifies the end time; the Docker Engine API expects both as Unix timestamps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl --unix-socket /var/run/docker.sock &amp;lt;http://v1.40/containers/container_id/logs?stdout=true&amp;amp;stderr=true&amp;amp;since=2022-01-01T00:00:00Z&amp;amp;until=2022-12-31T23:59:59Z&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, the following example shows how to limit the logs to a specific number of lines using the &lt;code&gt;tail&lt;/code&gt; query parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl --unix-socket /var/run/docker.sock &amp;lt;http://v1.40/containers/container_id/logs?stdout=true&amp;amp;stderr=true&amp;amp;tail=100&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Logging Drivers
&lt;/h3&gt;

&lt;p&gt;Logging drivers are plugins in the Docker ecosystem that provide a means to redirect logs generated by Docker containers to various log storage destinations, such as local or remote files, centralized log servers, or cloud-based logging services. The logging driver is configured on the Docker daemon and determines the method used to collect and store logs from containers.&lt;/p&gt;

&lt;p&gt;Docker has several built-in logging drivers, including JSON files (default logging driver), Syslog, Journald, and Fluentd, each with its advantages and disadvantages. You can check out more &lt;a href="https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers" rel="noopener noreferrer"&gt;logging drivers&lt;/a&gt; available. Learn how to configure a Docker daemon to a logging driver from this &lt;a href="https://signoz.io/blog/docker-logging/#configure-a-docker-container-to-use-a-logging-driver" rel="noopener noreferrer"&gt;guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The JSON File logging driver, for example, stores logs as JSON objects in a local file, which is useful for debugging, while the Syslog logging driver forwards logs to a remote syslog server for centralized log management. The Journald logging driver sends logs to the local system's journal, and the Fluentd logging driver forwards logs to a Fluentd log collector.&lt;/p&gt;

&lt;p&gt;To set up Syslog as your logging driver, refer to this &lt;a href="https://signoz.io/blog/docker-syslog/#setting-up-docker-syslog" rel="noopener noreferrer"&gt;guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collecting Docker logs with a Log Analytics tool
&lt;/h3&gt;

&lt;p&gt;Log analytics tools provide a way to collect, process, and store logs generated by Docker containers. These solutions provide a number of benefits over using built-in logging drivers, including advanced log analysis, centralized log management, and more robust logging capabilities.&lt;/p&gt;

&lt;p&gt;Log analytics tools can be used in combination with the Docker API and logging drivers to provide a complete logging solution for Docker containers. For example, logs from containers can be sent to an external log server using a logging driver, and then analyzed and visualized using a logging solution like &lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Managing Docker Logs
&lt;/h2&gt;

&lt;p&gt;Efficient log management is key to ensuring the performance and reliability of containerized applications. In this section, we will explore various strategies for managing Docker logs. They are:&lt;/p&gt;

&lt;h3&gt;
  
  
  Establishing Log Retention Policies
&lt;/h3&gt;

&lt;p&gt;A well-defined log retention policy is a cornerstone of effective log management. This policy outlines the length of time that logs will be stored and when they will be deleted. By establishing such a policy, organizations can ensure that logs do not consume excessive disk space and that relevant information is readily available for debugging and analysis purposes. In addition, a retention policy helps to streamline the log management process and enables organizations to retain only the data that is necessary for their specific needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Log Rotation
&lt;/h3&gt;

&lt;p&gt;Log rotation is an essential aspect of log management that helps to organize logs and conserve disk space by periodically moving older logs to a separate file and writing new logs to the current log file. Effective log rotation configuration requires consideration of factors such as log size, frequency of generation, and retention requirements. For example, some logs may need to be kept for an extended period of time due to compliance or regulatory requirements, while others may have a shorter retention period. To configure log rotation for your Docker containers, refer to this guide.&lt;/p&gt;
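&lt;p&gt;As an illustration, rotation for the default &lt;code&gt;json-file&lt;/code&gt; driver can be configured globally in &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; using the &lt;code&gt;max-size&lt;/code&gt; and &lt;code&gt;max-file&lt;/code&gt; options documented by Docker; the values below are example choices, not recommendations:&lt;/p&gt;

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

&lt;p&gt;With this in place, each container keeps at most three log files of up to 10 MB each. The Docker daemon must be restarted for the change to take effect, and it applies only to newly created containers.&lt;/p&gt;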

&lt;h3&gt;
  
  
  Cleaning Logs
&lt;/h3&gt;

&lt;p&gt;Maintaining an organized log infrastructure requires regular cleaning of logs to prevent disk space from being consumed by unnecessary data. This involves identifying and removing logs that have reached their established retention period and are no longer needed.&lt;/p&gt;

&lt;p&gt;Cleaning logs also helps to ensure that logs are easily accessible and readable when they are needed for debugging and analysis. If logs are not cleaned regularly, they can become cluttered and difficult to navigate, making it difficult to quickly find the information needed to diagnose issues with containers and applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Archiving Logs for Long-Term Storage
&lt;/h3&gt;

&lt;p&gt;Archiving logs for long-term storage is a crucial aspect of log management as it ensures that valuable information is retained for future reference. This may include logs needed for auditing, compliance, and forensic purposes. Archived logs can be stored in a separate server or a cloud storage solution to ensure they are secure and easily accessible.&lt;/p&gt;

&lt;p&gt;It is important to follow a well-defined archiving process that includes regularly backing up logs and ensuring their availability and integrity over time. Additionally, it is advisable to have a disaster recovery plan in place to ensure that logs are not lost in case of any unexpected events.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring Logs
&lt;/h3&gt;

&lt;p&gt;Proactive monitoring of logs is a key aspect in ensuring the stability and efficiency of containerized applications. Through regular inspection of logs, potential problems can be detected early, allowing for prompt resolution.&lt;/p&gt;

&lt;p&gt;This can be accomplished by utilizing the Docker CLI, the Docker API, or leveraging external logging solutions. By implementing a consistent log monitoring process, the reliability and performance of containerized applications can be greatly improved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Logs Analysis with SigNoz
&lt;/h2&gt;

&lt;p&gt;By leveraging a log analytics tool like SigNoz, organizations can benefit from advanced log management capabilities, including real-time log collection, aggregation, and analysis, as well as centralized log storage and retrieval.&lt;/p&gt;

&lt;p&gt;SigNoz is a full-stack open-source solution for Application Performance Monitoring that streamlines the process of monitoring logs, metrics, and traces. Log management is a crucial aspect of observability, and SigNoz offers a wide range of tools to help you manage, collect, and analyze logs generated by Docker containers.&lt;/p&gt;

&lt;p&gt;The tool leverages the power of ClickHouse, a high-performance columnar database, to store and access log data for efficient analysis. Moreover, SigNoz adopts OpenTelemetry, the emerging standard for instrumenting cloud-native applications, which is backed by the CNCF.&lt;/p&gt;

&lt;p&gt;The logs tab in SigNoz is packed with advanced features that streamline the process of analyzing logs. Features such as a log query builder, search across multiple fields, structured table view, and JSON view make the process of analyzing Docker logs easier and more efficient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5dp3il2x9dg1fekdyhi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5dp3il2x9dg1fekdyhi.png" alt="Log management in Signoz" width="800" height="457"&gt;&lt;/a&gt; &lt;em&gt;Log management in SigNoz&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;SigNoz offers real-time analysis of logs, enabling you to search, filter, and visualize them as they are generated. This can assist in identifying patterns, trends, and problems in the logs and resolving issues efficiently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqe1530s5q0fassuc3hw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqe1530s5q0fassuc3hw.png" alt="Live Tail Logging in SigNoz" width="800" height="510"&gt;&lt;/a&gt; &lt;em&gt;Live Tail Logging in SigNoz&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With the advanced Log Query Builder, you can quickly filter logs by mixing and matching fields.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35960dnb3vice9z2e613.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35960dnb3vice9z2e613.png" alt="Advanced Log Query Builder in SigNoz" width="800" height="317"&gt;&lt;/a&gt; &lt;em&gt;Advanced Log Query Builder in SigNoz&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with SigNoz
&lt;/h2&gt;

&lt;p&gt;SigNoz can be installed on macOS or Linux computers in just three steps by using a simple install script.&lt;/p&gt;

&lt;p&gt;The install script automatically installs &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;Docker Engine&lt;/a&gt; on Linux. However, on macOS, you must manually install Docker Engine before running the install script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone -b main &amp;lt;https://github.com/SigNoz/signoz.git&amp;gt;
cd signoz/deploy/
./install.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can visit our documentation for instructions on how to install SigNoz using Docker Swarm and Helm Charts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/docs/install/docker/?utm_source=blog&amp;amp;utm_medium=opentelemetry_springboot" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnx6uyuojehnq4kplqk8.png" alt="deploy_docker_documentation" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you liked what you read, then check out our GitHub repo 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/SigNoz/signoz" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl11retc5gn6d9vaov72w.png" alt="signoz_github" width="708" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Docker logs are critical components of managing and maintaining the health of Docker containers and their applications. By leveraging the power of Docker logs, organizations can optimize their container-based infrastructure and improve their ability to troubleshoot, analyze, and monitor the performance of their applications.&lt;/p&gt;

&lt;p&gt;It is important to have a robust and efficient log management strategy in place, as it helps to ensure that logs are captured, stored, and analyzed effectively. Adopting a dedicated log management tool, instead of relying solely on the native methods of accessing and managing Docker logs, can provide a range of advanced features and greater flexibility for analyzing and processing logs from your containers.&lt;/p&gt;

&lt;p&gt;Log management tools like SigNoz can enhance the capability to handle logs generated by Docker containers. It offers a comprehensive and scalable approach to log analytics that can cater to specific needs and requirements, which might not be met by the native Docker logging options like logging drivers or the Docker API. For instance, advanced log parsing, filtering, or transforming functions that are not feasible using just the logging drivers or the Docker API can be carried out with SigNoz.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Related Posts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/docker-logging/" rel="noopener noreferrer"&gt;Docker Logging Complete Guide (Configuration &amp;amp; Logging strategies) | SigNoz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/docker-syslog/#setting-up-docker-syslog" rel="noopener noreferrer"&gt;Configure your Docker Syslog Logging Driver | SigNoz&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>Cloud Storage in AWS</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Tue, 31 Jan 2023 09:37:18 +0000</pubDate>
      <link>https://forem.com/aws-builders/cloud-storage-in-aws-2j89</link>
      <guid>https://forem.com/aws-builders/cloud-storage-in-aws-2j89</guid>
      <description>&lt;p&gt;Cloud storage is a rapidly growing technology that provides users with the ability to store and access their data over the internet. With the increasing demand for cloud storage, the market is flooded with different types of cloud storage solutions, each with its unique set of features and benefits.&lt;/p&gt;

&lt;p&gt;In this article, we will discuss what cloud storage is, types of data storage in the cloud, and storage services offered by AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Storage Defined
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Cloud storage is a cloud computing model that stores data on the Internet through a cloud computing provider who manages and operates data storage as a service. It’s delivered on demand with just-in-time capacity and costs, and eliminates buying and managing your own data storage infrastructure. This gives you agility, global scale and durability, with “anytime, anywhere” data access.&lt;/em&gt; - &lt;strong&gt;AWS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cloud storage is a form of data storage that entails the use of remote servers, which can be accessed over the internet, rather than relying on local servers or personal computers. A cloud storage provider manages, maintains, and backs up the stored data, making it possible for users to access their data from any device connected to the internet. &lt;/p&gt;

&lt;p&gt;This approach to data storage offers several key benefits, such as scalability, accessibility, and disaster recovery, which are attractive to individuals and organizations seeking to store large amounts of data. In addition, cloud storage can be utilized for a variety of purposes, including backups, file sharing, and big data analytics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Data Storage in the Cloud
&lt;/h2&gt;

&lt;p&gt;Choosing the right type of data storage in the cloud can be a complex task, as each option is designed to serve a specific purpose. To help make informed decisions about data storage needs, it's important to understand the different types of cloud storage available. In this section, we will dive into the three main types of data storage used in the cloud:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Object storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;File storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Block storage&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Object Storage
&lt;/h3&gt;

&lt;p&gt;Object storage is a data storage architecture that operates by managing data as individual objects, as opposed to the traditional file system or block-level storage approach. Object storage uses a unique identifier, known as a key, to store objects. The contents of an object store can be distributed across multiple servers, which increases availability and durability. Additionally, object storage data may be replicated across multiple data centers, providing easy access through simple web service interfaces.&lt;/p&gt;

&lt;p&gt;Characterized by its ability to offer unlimited scalability, object storage is an efficient solution for the storage of vast amounts of unstructured data, such as images, videos, backups, and archives. The utilization of object storage results in cost-effectiveness, limitless scalability, exceptional durability, and heightened accessibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  File Storage
&lt;/h3&gt;

&lt;p&gt;File storage refers to the process of storing and organizing digital files on a computer or network-attached storage (NAS) device. This system employs a hierarchical file organization, consisting of directories, subdirectories, and individual files, each with its distinct name and the potential to contain a variety of data, including text, images, audio, and video. File storage may be either local or remote and can be accessed through either a file transfer protocol (FTP) or network file system (NFS).&lt;/p&gt;

&lt;p&gt;File storage solutions are well-suited for the management of large content repositories, development environments, media stores, and user home directories due to their ability to preserve a folder structure and provide network access. The advantages of file storage are extensive, including organization, accessibility, security, and dependability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Block Storage
&lt;/h3&gt;

&lt;p&gt;Block storage is a data storage architecture that divides data into fixed-size blocks, rather than storing it as a continuous stream of bytes, and distributes those blocks across a system that can be physically spread out to maximize efficiency.&lt;/p&gt;

&lt;p&gt;Each block operates as a separate unit of storage, enabling independent allocation, reading, and writing. This type of storage is often utilized for holding vast quantities of structured data, including databases, virtual machines, and file systems. &lt;/p&gt;

&lt;p&gt;The data is stored on physical storage devices, such as hard drives or solid-state drives, and is either directly connected to a server or accessed over a network, typically through block-level protocols such as iSCSI or Fibre Channel.&lt;/p&gt;

&lt;p&gt;The primary benefit of block storage is its exceptional performance, making it an ideal solution for applications that demand quick access to large amounts of data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9l6cbvt3xr4xncbhh35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9l6cbvt3xr4xncbhh35.png" alt="Differences" width="302" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Instance Stores
&lt;/h2&gt;

&lt;p&gt;Instance stores, also known as ephemeral storage, are block-level storage volumes that behave like physical hard drives. They are storage disks physically attached to the host computer of, for example, an EC2 instance, and hence have the same lifespan as the instance. The data stored in an instance store persists only during the life of the associated EC2 instance. &lt;/p&gt;

&lt;p&gt;Instance store is ideal for use cases where the data needs to be accessed frequently and quickly, such as for high-performance databases and caching workloads. However, it is important to note that if the underlying host fails, the data stored in an instance store will be lost. Whenever the instance is terminated, the data stored on it is deleted. This makes it unsuitable for long-term data storage.&lt;/p&gt;
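
&lt;p&gt;On modern (Nitro-based) EC2 instances, instance store volumes appear as NVMe block devices, so you can identify them from the command line. The commands below are a minimal sketch and assume an instance type that actually includes instance store volumes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List all block devices attached to the instance
lsblk

# On Nitro instances, the NVMe model name distinguishes instance storage
# ("Amazon EC2 NVMe Instance Storage") from EBS ("Amazon Elastic Block Store")
# (requires the nvme-cli package)
sudo nvme list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;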

&lt;h2&gt;
  
  
  Storage Services offered by AWS
&lt;/h2&gt;

&lt;p&gt;Amazon Web Services (AWS) offers a variety of cloud storage options for businesses and individuals. Understanding the different types of cloud storage in AWS is important for determining the best fit for your storage needs. Here are some of the most popular types of cloud storage in AWS:&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Elastic Block Store (Amazon EBS)
&lt;/h3&gt;

&lt;p&gt;In contrast to the instance store, Amazon EBS provides persistent storage: if an Amazon EC2 instance is terminated, the data stored on its EBS volumes remains accessible. EBS provides block-level storage volumes that work with EC2 instances.&lt;br&gt;
With EBS, you can create virtual hard drives called EBS volumes and attach them to your instance; because they aren't tied directly to the host, data written to an EBS volume persists.&lt;/p&gt;
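
&lt;p&gt;As a sketch of this workflow, the AWS CLI commands below create a volume, attach it to an instance, and prepare it for use. The volume ID, instance ID, Availability Zone, and device name are hypothetical placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a 10 GiB gp3 volume in the instance's Availability Zone
aws ec2 create-volume --availability-zone us-east-1a --size 10 --volume-type gp3

# Attach it to a running instance (both IDs are placeholders)
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf

# On the instance: create a filesystem and mount the new volume
# (the device may appear under a different name, e.g. /dev/xvdf or /dev/nvme1n1)
sudo mkfs -t xfs /dev/sdf
sudo mount /dev/sdf /mnt/data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;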

&lt;h3&gt;
  
  
  Amazon Simple Storage Service (Amazon S3)
&lt;/h3&gt;

&lt;p&gt;Amazon S3 is an object-level storage service. It offers the ability to store and access data in the form of objects, which are organized into buckets. These objects can be anything from images, texts, videos, or binary files.&lt;/p&gt;

&lt;p&gt;S3 is designed to provide scalable, highly durable, and highly available data storage for a vast array of use cases, such as big data analytics, backup and disaster recovery, and web and mobile applications. The service guarantees data durability through automatic replication across multiple Availability Zones within an AWS region. Additionally, it offers versioning capabilities, enabling users to retain and access multiple iterations of an object over time.&lt;/p&gt;
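
&lt;p&gt;To make this concrete, the AWS CLI sketch below creates a bucket, uploads an object, and retrieves it again. The bucket name is a hypothetical placeholder and must be globally unique:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a bucket (bucket names are globally unique)
aws s3 mb s3://my-example-bucket-12345

# Upload a file as an object, then list the bucket's contents
aws s3 cp backup.tar.gz s3://my-example-bucket-12345/backups/backup.tar.gz
aws s3 ls s3://my-example-bucket-12345/backups/

# Download the object back
aws s3 cp s3://my-example-bucket-12345/backups/backup.tar.gz ./restore.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;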

&lt;p&gt;Amazon S3 features different storage classes, each designed to meet specific data retrieval and availability needs. This allows users to choose the best storage option for their specific requirements. These storage classes include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Standard:&lt;/strong&gt;&lt;br&gt;
This default storage option for Amazon S3 is optimized for high-performance, low-latency access to frequently used data. It offers exceptional durability and availability, ensuring data is stored across a minimum of three Availability Zones for optimal resiliency. This storage option is ideal for organizations that require quick and reliable access to their data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Standard-Infrequent Access (S3 Standard-IA):&lt;/strong&gt;&lt;br&gt;
Amazon S3 Infrequent Access is a good option for storing data that is accessed less frequently but requires rapid access when needed. It provides high durability by storing data across a minimum of three Availability Zones, while also offering lower storage prices and higher retrieval prices. This storage class is ideal for backups, disaster recovery files, and other objects that require long-term storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 One Zone-Infrequent Access (S3 One Zone-IA):&lt;/strong&gt;&lt;br&gt;
The S3 One Zone-IA storage option is intended for infrequently accessed data where lower resilience is acceptable. It stores data in a single Availability Zone, offering a lower storage cost than S3 Standard-IA; the trade-off is that the data would be lost if that Availability Zone were destroyed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Intelligent-Tiering:&lt;/strong&gt;&lt;br&gt;
The S3 Intelligent-Tiering storage option provides flexible data management by automatically shifting objects between access tiers based on evolving access patterns. For instance, if an object has not been accessed for 30 days, Amazon S3 moves it to the infrequent access tier. This storage option is an optimized solution for data with unpredictable access patterns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Glacier:&lt;/strong&gt;&lt;br&gt;
S3 Glacier is a secure, durable, and extremely low-cost storage class designed for data archiving and long-term backup. This storage option is ideal for organizations that need to store data for several years for auditing purposes and do not require rapid retrieval. It provides a cost-effective solution for preserving data over an extended period of time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Glacier Deep Archive:&lt;/strong&gt;&lt;br&gt;
Amazon S3 Glacier Deep Archive is a cost-effective, long-term storage solution designed specifically for data archiving and backup. With retrieval times of 12 hours or more, it is the lowest-cost object storage class. It is ideal for organizations that need to store large amounts of data for extended periods, with a focus on minimizing cost rather than retrieval time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
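
&lt;p&gt;Objects can be moved between these storage classes automatically with a lifecycle rule. The sketch below transitions objects under a prefix to S3 Glacier after 90 days; the bucket name, rule ID, and prefix are hypothetical placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Transition objects under logs/ to S3 Glacier 90 days after creation
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-example-bucket-12345 \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "archive-old-logs",
        "Status": "Enabled",
        "Filter": { "Prefix": "logs/" },
        "Transitions": [{ "Days": 90, "StorageClass": "GLACIER" }]
      }]
    }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;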

&lt;h3&gt;
  
  
  Amazon Elastic File System (Amazon EFS)
&lt;/h3&gt;

&lt;p&gt;Amazon Elastic File System (EFS) is a cloud-based file storage solution. It is a serverless and scalable file system. Because it is elastic, your file system automatically grows and shrinks as you add and remove files.&lt;/p&gt;

&lt;p&gt;EFS is specifically designed to support use cases that require shared access to data, such as big data processing, content management, and web serving. It enables multiple EC2 instances to access a shared file system simultaneously, making it effortless to share data between instances.&lt;/p&gt;

&lt;p&gt;Amazon EFS offers high levels of availability and durability, as it stores multiple redundant copies of data across multiple Availability Zones within an AWS region. This ensures that the data is always accessible, even in the case of hardware failures or other infrastructure issues.&lt;/p&gt;
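
&lt;p&gt;Because EFS exposes a standard NFSv4 endpoint, an EC2 instance can mount it like any network file system. The sketch below uses a hypothetical file system ID and region, with the mount options recommended in the EFS documentation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a mount point and mount the file system over NFSv4.1
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;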

&lt;h2&gt;
  
  
  Instance stores vs AWS Storage classes
&lt;/h2&gt;

&lt;p&gt;Instance stores have distinct characteristics compared to other AWS storage services such as Amazon S3 and Amazon EBS. They are physically attached to the host computer of an EC2 instance, providing block-level storage optimized for rapid access with low latency. In comparison, S3 and EBS are network-attached storage: S3 is accessed over HTTP(S) APIs, while EBS volumes are attached to instances over the AWS network.&lt;/p&gt;

&lt;p&gt;Instance stores lack persistence, meaning that any data stored in them will be lost if the EC2 instance terminates or its host computer fails. Conversely, data stored in S3 and EBS is automatically replicated and persists independently of any single instance. The capacity of instance stores is limited to the physical storage of the host computer, while S3 offers virtually unlimited capacity and EBS volumes can be provisioned and resized as needed.&lt;/p&gt;

&lt;p&gt;Instance stores are suitable for temporary storage of instance-local data that is frequently accessed, such as caching, temporary data processing, or as a scratch disk for high-performance computing applications. On the other hand, S3 and EBS are best for storing critical data that must be persistently stored and accessed over a longer period.&lt;/p&gt;

&lt;p&gt;In summary, instance stores offer fast, low-latency storage for temporary data, and S3 and EBS offer durable, persistent storage for critical data. The choice between the two depends on the specific storage requirements of the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, cloud storage in AWS offers a flexible and scalable solution for storing and accessing data. It provides a range of storage options, including S3, EBS, and EFS, to meet different data storage needs. It also offers features such as data management, security, and disaster recovery, making it a highly secure and reliable solution for businesses and organizations. The pay-as-you-go pricing model and on-demand access to storage resources make AWS a cost-effective choice for organizations of all sizes.&lt;/p&gt;

</description>
      <category>crypto</category>
      <category>web3</category>
      <category>blockchain</category>
      <category>offers</category>
    </item>
    <item>
      <title>7 Open-Source Log Management Tools that you may consider in 2023</title>
      <dc:creator>Daniel Favour</dc:creator>
      <pubDate>Tue, 31 Jan 2023 02:03:31 +0000</pubDate>
      <link>https://forem.com/danielfavour/7-open-source-log-management-tools-that-you-may-consider-in-2023-1d5m</link>
      <guid>https://forem.com/danielfavour/7-open-source-log-management-tools-that-you-may-consider-in-2023-1d5m</guid>
      <description>&lt;p&gt;Effective log management is a fundamental aspect of maintaining and troubleshooting today's complex systems and applications. The sheer volume of data generated by various software and hardware components can make it challenging to identify and resolve issues in a timely manner. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudmzzuhu6q496yzq8icv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudmzzuhu6q496yzq8icv.png" alt="open-source log management cover" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open-source log management tools offer a cost-efficient and customizable approach for collecting, analyzing, and visualizing log data. These tools empower administrators with the ability to swiftly discern patterns and trends within log data, thereby streamlining the diagnosis and resolution of problems.&lt;/p&gt;

&lt;p&gt;In this article, we will take a closer look at some of the most popular open-source log management tools available and explore the features and capabilities of each tool. Whether you are a system administrator, developer, or security professional, this article will provide you with the information you need to choose the best log management solution for your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top 7 open-source log management tools
&lt;/h2&gt;

&lt;p&gt;In this section, we will discuss the top 7 open-source log management tools that have been adopted by organizations. They are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.elastic.co/logstash/" rel="noopener noreferrer"&gt;Logstash&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.graylog.org/" rel="noopener noreferrer"&gt;Graylog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.fluentd.org/" rel="noopener noreferrer"&gt;FluentD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.syslog-ng.com/" rel="noopener noreferrer"&gt;Syslog-ng&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Logwatch&lt;/li&gt;
&lt;li&gt;&lt;a href="https://flume.apache.org/" rel="noopener noreferrer"&gt;Apache Flume&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  SigNoz
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; is a comprehensive, open-source log management and analysis platform that offers a centralized location for the collection, storage, and analysis of log data. Designed to aid organizations in gaining valuable insights into their IT infrastructure, applications, and security, the platform offers real-time visibility, automated troubleshooting, and predictive analytics.&lt;/p&gt;

&lt;p&gt;SigNoz supports the collection of log data from a wide range of sources, including servers, network devices, applications, and cloud services. It uses OpenTelemetry to collect and process log data. OpenTelemetry has quietly become the world standard for instrumenting cloud-native applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzuvlp4wmro4i99wda273.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzuvlp4wmro4i99wda273.png" alt="Log management in SigNoz" width="800" height="457"&gt;&lt;/a&gt; &lt;em&gt;Log management in SigNoz&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The platform also offers a variety of visualization options, such as charts, graphs, and maps, to aid users in gaining insights into their log data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1c1ct2tumo30he2s9cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1c1ct2tumo30he2s9cn.png" alt="Log tailing in SigNoz" width="800" height="510"&gt;&lt;/a&gt; &lt;em&gt;Live log tailing in SigNoz to keep track of logs in real-time&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Furthermore, it provides automated alerting and troubleshooting features, enabling organizations to identify and resolve issues quickly.&lt;/p&gt;

&lt;p&gt;Some key features of SigNoz are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log data collection and analysis&lt;/li&gt;
&lt;li&gt;Centralized data storage&lt;/li&gt;
&lt;li&gt;Real-time visibility&lt;/li&gt;
&lt;li&gt;Data visualization&lt;/li&gt;
&lt;li&gt;Alerting and troubleshooting&lt;/li&gt;
&lt;li&gt;Support for integration with other tools and systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can read more about SigNoz from its documentation.&lt;br&gt;
&lt;a href="https://signoz.io/docs/install/docker/?utm_source=blog&amp;amp;utm_medium=opentelemetry_springboot" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnx6uyuojehnq4kplqk8.png" alt="deploy_docker_documentation" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Logstash
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.elastic.co/logstash/" rel="noopener noreferrer"&gt;Logstash&lt;/a&gt; is a powerful, open-source log management tool that is part of the &lt;a href="https://www.elastic.co/elastic-stack/" rel="noopener noreferrer"&gt;Elastic Stack&lt;/a&gt; (previously known as the ELK stack). Logstash is capable of collecting and processing logs from a wide range of sources and can output them to a variety of destinations, including Elasticsearch, a search engine, an analytics engine, or a file.&lt;/p&gt;

&lt;p&gt;As a log management tool, Logstash provides a pipeline for collecting, parsing, and processing log data. It ingests log data from various sources, such as files, Syslog, and network inputs, and can parse and process the data using a variety of filters and plugins.&lt;/p&gt;
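
&lt;p&gt;A minimal &lt;code&gt;logstash.conf&lt;/code&gt; pipeline illustrating this input-filter-output flow might look like the sketch below; the log file path and Elasticsearch host are hypothetical placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input {
  file {
    path =&gt; "/var/log/syslog"
    start_position =&gt; "beginning"
  }
}

filter {
  grok {
    match =&gt; { "message" =&gt; "%{SYSLOGLINE}" }
  }
}

output {
  elasticsearch {
    hosts =&gt; ["http://localhost:9200"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;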

&lt;p&gt;Capable of handling high volumes of data and heavy loads while maintaining good performance, Logstash can be run as a standalone service or as a distributed system. &lt;strong&gt;Logstash itself does not have a built-in dashboard for viewing logs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;However, it can be used in conjunction with other tools such as &lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; and &lt;a href="https://www.elastic.co/kibana/" rel="noopener noreferrer"&gt;Kibana&lt;/a&gt; to create and share interactive visualizations and dashboards of log data collected by Logstash. You can find docs on how to send data collected by Logstash to SigNoz &lt;a href="https://signoz.io/docs/userguide/logstash_to_signoz/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rehp768kadhu9qt8j4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rehp768kadhu9qt8j4z.png" alt="search for logs" width="800" height="449"&gt;&lt;/a&gt; &lt;em&gt;Search for logs with a particular indexed pattern sent from Logstash in Kibana&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some key features of Logstash are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log data collection from various sources&lt;/li&gt;
&lt;li&gt;Parsing and processing of log data&lt;/li&gt;
&lt;li&gt;High performance and scalability&lt;/li&gt;
&lt;li&gt;Output to various destinations&lt;/li&gt;
&lt;li&gt;Multiple platforms support&lt;/li&gt;
&lt;li&gt;Integration with other ELK stack components&lt;/li&gt;
&lt;li&gt;Built-in security features.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Graylog
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.graylog.org/" rel="noopener noreferrer"&gt;Graylog&lt;/a&gt; is an open-source log management and analysis platform designed to collect, store, and analyze large volumes of log data from various sources. Utilizing a pipeline system for data collection and processing, Graylog collects data from various sources, parses, transforms, and enriches it before storing it in a database, allowing for easy searching and analysis via the Graylog web interface, which provides a wide range of visualization options.&lt;/p&gt;

&lt;p&gt;In addition to its robust data collection and processing capabilities, Graylog also offers alerting capabilities, sending notifications when specific conditions are met such as the encounter of a particular error message. The platform also provides a RESTful API for integration with other tools and systems and can handle large volumes of log data, scaling horizontally by adding more Graylog server nodes to a cluster. &lt;/p&gt;

&lt;p&gt;Graylog supports multiple data inputs and outputs, it can collect data from various sources such as Syslog, GELF, log files, and Windows Event Log, and it can output data to other systems such as Elasticsearch, Apache Kafka, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furseex6dzd0upga8g4ax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furseex6dzd0upga8g4ax.png" alt="Seach configuration" width="800" height="384"&gt;&lt;/a&gt; &lt;em&gt;Search configuration in Graylog&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some key features of Graylog are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log data collection and analysis&lt;/li&gt;
&lt;li&gt;Data processing pipeline&lt;/li&gt;
&lt;li&gt;Search and analysis capabilities&lt;/li&gt;
&lt;li&gt;Alerting and notifications&lt;/li&gt;
&lt;li&gt;RESTful API&lt;/li&gt;
&lt;li&gt;Scalability&lt;/li&gt;
&lt;li&gt;Multi-data inputs and outputs&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  FluentD
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.fluentd.org/" rel="noopener noreferrer"&gt;Fluentd&lt;/a&gt; is a powerful log management tool that offers organizations the flexibility and scalability required to handle large volumes of log data from a variety of sources and transport it to various destinations. Utilizing a flexible and modular architecture, Fluentd allows users to easily add new input and output plugins to integrate with a wide range of systems and applications. It supports a wide range of data sources and destinations, including databases, message queues, and data stores.&lt;/p&gt;

&lt;p&gt;Fluentd has a built-in buffering mechanism that enables it to handle temporary failures in the output destination, ensuring that data is not lost. Users can filter, buffer, and format log data using the built-in filters and parsers before sending it to the output destinations.&lt;/p&gt;
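
&lt;p&gt;A minimal Fluentd configuration illustrating this tail-and-forward flow might look like the sketch below; the log path, tag, and output path are hypothetical placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Tail an application log file and route it by tag
&amp;lt;source&amp;gt;
  @type tail
  path /var/log/app.log
  pos_file /var/log/fluent/app.log.pos
  tag app.logs
  &amp;lt;parse&amp;gt;
    @type none
  &amp;lt;/parse&amp;gt;
&amp;lt;/source&amp;gt;

# Write matching events to buffered output files
&amp;lt;match app.logs&amp;gt;
  @type file
  path /var/log/fluent/output
&amp;lt;/match&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;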

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyuzf8li7h21di8zgawi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyuzf8li7h21di8zgawi.png" alt="Logs Overview" width="800" height="501"&gt;&lt;/a&gt; &lt;em&gt;Logs overview in FluentD&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some key features of Fluentd are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log data collection and transport&lt;/li&gt;
&lt;li&gt;Flexible and modular architecture&lt;/li&gt;
&lt;li&gt;Input and output plugins&lt;/li&gt;
&lt;li&gt;Variety of data sources and destinations&lt;/li&gt;
&lt;li&gt;Built-in security features&lt;/li&gt;
&lt;li&gt;Filtering, buffering, and formatting of log data&lt;/li&gt;
&lt;/ul&gt;
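&lt;p&gt;As a sketch of how these pieces fit together, the minimal configuration below tails a log file and prints each event to stdout. The file paths and tag are placeholders, and the &lt;code&gt;none&lt;/code&gt; parser simply passes each line through unparsed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Input: follow an application log file (placeholder paths)
&amp;lt;source&amp;gt;
  @type tail
  path /var/log/app/app.log
  pos_file /var/lib/fluentd/app.log.pos
  tag app.logs
  &amp;lt;parse&amp;gt;
    @type none
  &amp;lt;/parse&amp;gt;
&amp;lt;/source&amp;gt;

# Output: print matching events to stdout
&amp;lt;match app.logs&amp;gt;
  @type stdout
&amp;lt;/match&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In a real deployment, the &lt;code&gt;stdout&lt;/code&gt; output would be swapped for a plugin targeting a database, message queue, or data store.&lt;/p&gt;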
&lt;h2&gt;
  
  
  Syslog-ng
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.syslog-ng.com/" rel="noopener noreferrer"&gt;Syslog-ng&lt;/a&gt; is an open-source log management tool designed for the collection, parsing, and transportation of log data from various sources to a wide range of destinations. Known for its flexibility and wide range of features and capabilities, such as filtering, parsing, rewriting, and alerting, Syslog-ng is a widely used tool in Linux and Unix-based systems for log management.&lt;/p&gt;

&lt;p&gt;Syslog-ng is capable of collecting log data from a diverse array of sources, including Syslog, GELF, log files, and Windows Event Log. It can parse, filter, and rewrite log messages before forwarding them to other systems, such as databases, message queues, and data stores. &lt;/p&gt;

&lt;p&gt;The tool offers a large number of built-in destination and source drivers for popular data destinations, including Elasticsearch, Apache Kafka, and more, allowing for easy integration with other systems. Additionally, Syslog-ng includes a built-in buffering mechanism that enables it to handle temporary failures in the output destination and ensures that data is not lost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdxv759s75l0uiant96d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdxv759s75l0uiant96d.png" alt="syslog ng" width="758" height="561"&gt;&lt;/a&gt; &lt;em&gt;Collecting and viewing log files in Syslog ng&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some key features of Syslog-ng are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log data collection and transport&lt;/li&gt;
&lt;li&gt;Flexible filtering and parsing capabilities&lt;/li&gt;
&lt;li&gt;Built-in source and destination drivers&lt;/li&gt;
&lt;li&gt;A large number of input and output plugins&lt;/li&gt;
&lt;li&gt;Built-in buffering mechanism&lt;/li&gt;
&lt;li&gt;Support for various log formats and protocols&lt;/li&gt;
&lt;/ul&gt;
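&lt;p&gt;As an illustration of Syslog-ng's configuration model, the minimal example below wires a source to a destination through a log path; the output file path is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Collect local system messages and syslog-ng's own internal messages
source s_local { system(); internal(); };

# Write everything to a single file (placeholder path)
destination d_file { file("/var/log/all.log"); };

# Connect the source to the destination
log { source(s_local); destination(d_file); };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Filters and parsers can be inserted into the &lt;code&gt;log&lt;/code&gt; statement to rewrite or route messages before they reach the destination.&lt;/p&gt;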
&lt;h2&gt;
  
  
  Logwatch
&lt;/h2&gt;

&lt;p&gt;Logwatch is an open-source log analysis tool designed to automatically parse and analyze log files from various services and applications running on Linux or Unix-based systems. It presents a summary of the log data, including system activity, security events, and potential issues in a detailed, easy-to-read format, making it simple to identify and troubleshoot problems.&lt;/p&gt;

&lt;p&gt;Logwatch utilizes a series of customizable filter scripts, written in Perl, to parse log data from various services and applications, such as Apache, SSH, and Syslog. These scripts can be modified to meet the specific needs of an organization. Additionally, Logwatch offers various options for controlling the output, including the ability to filter out specific log entries, adjust the level of detail, and send the output to a specific email address or file. &lt;/p&gt;

&lt;p&gt;Logwatch is typically run on a daily basis and can be scheduled to run automatically using cron or another scheduling tool. It also offers a command-line interface, which allows users to run Logwatch and view the output directly on the command line.&lt;/p&gt;
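&lt;p&gt;For example, a one-off run from the command line might look like the following. The flags shown are common Logwatch options, but check the man page of your installed version, and treat the email address as a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Summarize today's sshd activity at high detail, printed to the terminal
logwatch --detail High --service sshd --range today --output stdout

# A daily cron entry that emails the report instead (placeholder address)
# 0 6 * * * /usr/sbin/logwatch --output mail --mailto admin@example.com --detail Med
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;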

&lt;p&gt;Some key features of Logwatch are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log data analysis&lt;/li&gt;
&lt;li&gt;Customizable filter scripts&lt;/li&gt;
&lt;li&gt;Detailed and easy-to-read output&lt;/li&gt;
&lt;li&gt;Output filtering and control&lt;/li&gt;
&lt;li&gt;Email and file output&lt;/li&gt;
&lt;li&gt;Scheduled and command-line execution&lt;/li&gt;
&lt;li&gt;Summary of system activity, security events, and potential problems&lt;/li&gt;
&lt;li&gt;Ability to filter out specific log entries&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Apache Flume
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://flume.apache.org/" rel="noopener noreferrer"&gt;Apache Flume&lt;/a&gt; is an open-source log management tool designed to efficiently collect, aggregate, and transport large volumes of log data from various sources to a centralized data store, such as HDFS or Hbase. It excels in handling large amounts of log data in real-time and is highly scalable, able to handle the load from multiple servers, network devices, and applications.&lt;/p&gt;

&lt;p&gt;In terms of log management, Apache Flume offers features such as data collection, transportation, aggregation, fault tolerance, and delivery guarantees. It also boasts a plugin-based architecture, allowing organizations to easily add new sources and sinks as needed, facilitating integration with other log management tools and systems, and enabling the addition of new log sources. Additionally, it is straightforward to set up and configure and exposes runtime metrics over an HTTP interface for monitoring.&lt;/p&gt;
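&lt;p&gt;As a sketch of Flume's source/channel/sink model, the properties file below defines a single agent that listens on a netcat source, buffers events in a memory channel, and writes them to a logger sink. The agent name, component names, and port are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Agent "a1": one source, one channel, one sink (placeholder names)
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Netcat source listening on a placeholder port
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# In-memory channel buffering events between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Logger sink writes received events to Flume's own log
a1.sinks.k1.type = logger

# Wire the components together
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In production, the logger sink would typically be replaced with an HDFS or HBase sink, and the memory channel with a durable file channel for stronger delivery guarantees.&lt;/p&gt;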

&lt;p&gt;Some key features of Apache Flume are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log data collection and transportation&lt;/li&gt;
&lt;li&gt;Data aggregation&lt;/li&gt;
&lt;li&gt;Centralized data storage&lt;/li&gt;
&lt;li&gt;Fault-tolerance and delivery guarantee&lt;/li&gt;
&lt;li&gt;Scalable&lt;/li&gt;
&lt;li&gt;Plugin-based architecture&lt;/li&gt;
&lt;li&gt;Web-based interface&lt;/li&gt;
&lt;li&gt;Real-time log data processing&lt;/li&gt;
&lt;li&gt;Integration with other log management tools and systems&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Choosing the right Log Management Tool
&lt;/h2&gt;

&lt;p&gt;When choosing a log management tool, consider factors such as data collection, ingestion, and processing capabilities, as well as scalability, security features, integration with other tools and systems, the user interface, and visualization options. Weighing these factors against your requirements will help you choose a log management tool that fits your use case.&lt;/p&gt;

&lt;p&gt;If you are looking for an open source log management tool that solves most of your monitoring needs, then &lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; can be a good choice. It provides logs, metrics, and traces under a single pane of glass with an intelligent correlation between the three types of telemetry signals. &lt;/p&gt;

&lt;p&gt;SigNoz is open-source and cost-effective for organizations. It is built to support OpenTelemetry natively. With the flexibility and scalability of OpenTelemetry and SigNoz, organizations can monitor and analyze large volumes of log data in real-time, making it an ideal solution for log management.&lt;/p&gt;
&lt;h2&gt;
  
  
  Getting started with SigNoz
&lt;/h2&gt;

&lt;p&gt;SigNoz can be installed on macOS or Linux computers in just three steps by using a simple install script.&lt;/p&gt;

&lt;p&gt;The install script automatically installs &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;Docker Engine&lt;/a&gt; on Linux. However, on macOS, you must manually install Docker Engine before running the install script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone -b main &amp;lt;https://github.com/SigNoz/signoz.git&amp;gt;
cd signoz/deploy/
./install.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can visit our documentation for instructions on how to install SigNoz using Docker Swarm and Helm Charts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/docs/install/docker/?utm_source=blog&amp;amp;utm_medium=opentelemetry_springboot" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnx6uyuojehnq4kplqk8.png" alt="deploy_docker_documentation" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check out its GitHub repo here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/SigNoz/signoz" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl11retc5gn6d9vaov72w.png" alt="signoz_github" width="708" height="162"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Related posts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/logging-as-a-service/" rel="noopener noreferrer"&gt;Logging as a service | Log Management with Open Source Tool&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-logs/" rel="noopener noreferrer"&gt;OpenTelemetry Logs - A Complete Introduction &amp;amp; Implementation | SigNoz&lt;/a&gt;&lt;/p&gt;

</description>
      <category>crypto</category>
      <category>web3</category>
      <category>blockchain</category>
    </item>
  </channel>
</rss>
