<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Emmanuel Akanji</title>
    <description>The latest articles on Forem by Emmanuel Akanji (@mannyuncharted).</description>
    <link>https://forem.com/mannyuncharted</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F782066%2F07b0a4e1-7816-4c14-bcc8-102082ae2a4e.jpeg</url>
      <title>Forem: Emmanuel Akanji</title>
      <link>https://forem.com/mannyuncharted</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mannyuncharted"/>
    <language>en</language>
    <item>
      <title>[Boost]</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Sat, 28 Jun 2025 22:02:23 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/-e3h</link>
      <guid>https://forem.com/mannyuncharted/-e3h</guid>
      <description>&lt;p&gt;Boosted: &lt;a href="https://dev.to/aldorax/why-we-built-flowspec-revolutionizing-our-workflow-engine-ml4"&gt;Why We Built FlowSpec: A Better Way to Orchestrate AI Workflows&lt;/a&gt; by Aldorax (Jun 28 '25, 2 min read).&lt;/p&gt;</description>
      <category>ai</category>
      <category>auvraai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Developing, Testing, and Deploying an ERC20 Token on Mantle Testnet using Hardhat and Ethers.js</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Mon, 12 Jun 2023 01:03:51 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/developing-testing-and-deploying-an-erc20-token-on-mantle-testnet-using-hardhat-and-ethersjs-53df</link>
      <guid>https://forem.com/mannyuncharted/developing-testing-and-deploying-an-erc20-token-on-mantle-testnet-using-hardhat-and-ethersjs-53df</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article, we will guide you through the process of creating and deploying an ERC20 token called Mantlecoin (MTNL) on the Mantle blockchain using Hardhat and Ethers.js. We will cover the necessary prerequisites, environment settings, and provide a step-by-step guide on how to create, test, and deploy your token.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mantle Blockchain: A Brief History and Overview
&lt;/h2&gt;

&lt;p&gt;Mantle is a blockchain platform designed to provide a scalable and secure infrastructure for decentralized applications (dApps). It offers a unique set of features, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;High throughput: Mantle's consensus algorithm allows for fast transaction processing and low latency, making it suitable for a wide range of applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security: The platform employs advanced cryptographic techniques to ensure the security and integrity of the network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Interoperability: Mantle supports cross-chain communication, enabling seamless interaction between different blockchain networks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developer-friendly: The platform provides a comprehensive set of tools and libraries to simplify the development and deployment of dApps.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Node.js 14.8 or higher&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;npm (version 6 or higher)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic knowledge of JavaScript&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, to interact with the Mantle testnet, it is recommended to have some BIT tokens in your wallet. You can obtain BIT tokens by following the instructions provided by the Mantle testnet faucet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the project
&lt;/h2&gt;

&lt;p&gt;First, create a new directory for your project and navigate to it in your terminal. Then, run the following command to initialize a new Hardhat project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx hardhat init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will create a basic Hardhat project structure with the necessary files and folders.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing dependencies
&lt;/h2&gt;

&lt;p&gt;Once you have set up your project, the next step is to install the necessary dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install --save-dev @nomiclabs/hardhat-ethers ethers @openzeppelin/contracts chai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a breakdown of what each dependency represents:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;@nomiclabs/hardhat-ethers: This is Hardhat's Ethers.js plugin, which provides integration between Hardhat and the Ethers.js library.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ethers: This library is an essential tool for interacting with Ethereum networks. It offers a simple and intuitive interface for working with smart contracts and handling Ethereum transactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;@openzeppelin/contracts: This library provides a collection of secure, audited smart contracts, including the ERC20 implementation. Using these contracts saves time and helps ensure the reliability of your Mantlecoin token.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;chai: A popular assertion library for JavaScript, often used with testing frameworks like Hardhat to write readable, expressive assertions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Configuring Hardhat
&lt;/h2&gt;

&lt;p&gt;Open the hardhat.config.js file and add the following code to enable the Ethers.js plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require("@nomiclabs/hardhat-ethers");

module.exports = {
  solidity: "0.8.4",
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Writing the Mantlecoin contract
&lt;/h2&gt;

&lt;p&gt;In your project directory, create a new file called &lt;code&gt;Mantlecoin.sol&lt;/code&gt; inside the contracts folder and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

contract Mantlecoin is ERC20 {
    constructor(uint256 initialSupply) ERC20("Mantlecoin", "MTNL") {
        _mint(msg.sender, initialSupply);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code snippet, we import the ERC20 contract from the OpenZeppelin Contracts library and create a new contract called Mantlecoin that inherits from ERC20.&lt;/p&gt;

&lt;p&gt;The constructor takes an initialSupply argument and calls the parent constructor with the token name "Mantlecoin" and the ticker symbol "MTNL". Finally, we mint the initial supply of tokens and assign it to the contract deployer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Compiling the contract
&lt;/h2&gt;

&lt;p&gt;To compile the Mantlecoin contract, run the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx hardhat compile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will compile the contract and generate the necessary artifacts in the artifacts folder.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing Tests for Mantlecoin
&lt;/h2&gt;

&lt;p&gt;To ensure that our token works as expected, let's write some tests. Create a new file called &lt;code&gt;Mantlecoin.test.js&lt;/code&gt; inside the test folder and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { expect } = require("chai");
const { ethers } = require("hardhat");

describe("Mantlecoin", function () {
  let Mantlecoin, mantlecoin, owner, addr1, addr2;

  beforeEach(async () =&amp;gt; {
    Mantlecoin = await ethers.getContractFactory("Mantlecoin");
    [owner, addr1, addr2] = await ethers.getSigners();
    mantlecoin = await Mantlecoin.deploy(1000000);
  });

  it("Should mint the initial supply to the owner", async () =&amp;gt; {
    const ownerBalance = await mantlecoin.balanceOf(owner.address);
    expect(ownerBalance).to.equal(1000000);
  });

  it("Should transfer tokens between accounts", async () =&amp;gt; {
    await mantlecoin.transfer(addr1.address, 500);
    const addr1Balance = await mantlecoin.balanceOf(addr1.address);
    expect(addr1Balance).to.equal(500);
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this test file, we import the required libraries and set up a beforeEach hook to deploy a new instance of the Mantlecoin contract before each test. We then write two tests: one to check that the initial supply is minted to the owner, and another to test token transfers between accounts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the tests
&lt;/h2&gt;

&lt;p&gt;To run the tests, execute the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx hardhat test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will run the tests and display the results in the terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the Contract to the Mantle Testnet
&lt;/h2&gt;

&lt;p&gt;To deploy our contract to the Mantle testnet, we need to update our &lt;code&gt;hardhat.config.js&lt;/code&gt; file with the appropriate network configuration. Add the following code to your hardhat.config.js file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require("@nomiclabs/hardhat-ethers");

module.exports = {
  solidity: "0.8.4",
  networks: {
    localhost: {
      url: "http://127.0.0.1:8545",
    },
    mantleTestnet: {
      url: "https://rpc.testnet.mantle.xyz/",
      chainId: 5001,
      accounts: [process.env.PRIVATE_KEY],
    },
  },
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Supply your private key through the &lt;code&gt;PRIVATE_KEY&lt;/code&gt; environment variable rather than hardcoding it in the config file, and never commit a private key to source control.&lt;/p&gt;
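One common way to supply the key (an assumption here: the dotenv package, installed with `npm install --save-dev dotenv`) is to load it from a local .env file that is excluded from version control:

```javascript
// hardhat.config.js (sketch, assuming the dotenv package is installed):
// a .env file containing PRIVATE_KEY=0x... is loaded at startup.
require("dotenv").config();
require("@nomiclabs/hardhat-ethers");

module.exports = {
  solidity: "0.8.4",
  networks: {
    mantleTestnet: {
      url: "https://rpc.testnet.mantle.xyz/",
      chainId: 5001,
      // fall back to an empty list so the config still loads without a key
      accounts: process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : [],
    },
  },
};
```

Remember to add .env to your .gitignore so the key never reaches the repository.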

&lt;p&gt;Now, create a new file called &lt;code&gt;deploy.js&lt;/code&gt; in the scripts folder and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function main() {
  const [deployer] = await ethers.getSigners();

  console.log("Deploying contracts with the account:", deployer.address);

  const Mantlecoin = await ethers.getContractFactory("Mantlecoin");
  const mantlecoin = await Mantlecoin.deploy(1000000);

  console.log("Mantlecoin address:", mantlecoin.address);
}

main()
  .then(() =&amp;gt; process.exit(0))
  .catch((error) =&amp;gt; {
    console.error(error);
    process.exit(1);
  });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script deploys the Mantlecoin contract with an initial supply of 1,000,000 tokens.&lt;/p&gt;
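Note that ERC20 amounts are denominated in the token's smallest unit; OpenZeppelin's ERC20 defaults to 18 decimals, so a deploy argument of 1000000 mints 1,000,000 base units rather than 1,000,000 whole MTNL. A minimal sketch of the conversion using plain BigInt (the helper name toBaseUnits is illustrative):

```javascript
// Convert a whole-token amount to ERC20 base units (hypothetical helper).
function toBaseUnits(amount, decimals) {
  return BigInt(amount) * 10n ** BigInt(decimals);
}

// 1,000,000 whole tokens at 18 decimals is 10^24 base units:
console.log(toBaseUnits(1000000, 18).toString()); // "1000000000000000000000000"
```

To mint 1,000,000 whole tokens you would pass the base-unit value (or ethers' own parseUnits utility) to deploy instead of the raw number.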

&lt;p&gt;To deploy the contract, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx hardhat run --network mantle scripts/deploy.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will deploy the contract to the Mantle testnet (the network name must match the key defined in hardhat.config.js) and display the contract address in the terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we have provided a detailed, step-by-step guide to creating and deploying an ERC20 token called Mantlecoin (MTNL) using Hardhat and Ethers.js. By following this guide, you should now have a solid understanding of how to create, test, and deploy your own ERC20 tokens.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Continuous Integration Pipeline using Jenkins</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Sun, 30 Oct 2022 13:33:51 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/continuous-integration-pipeline-using-jenkins-24kp</link>
      <guid>https://forem.com/mannyuncharted/continuous-integration-pipeline-using-jenkins-24kp</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;What is Jenkins?&lt;/li&gt;
&lt;li&gt;Why Jenkins?&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Installing and configuring Jenkins Server&lt;/li&gt;
&lt;li&gt;Configure Jenkins to retrieve source code from GitHub via Webhooks&lt;/li&gt;
&lt;li&gt;Set up Jenkins to use SSH to copy files to the NFS server&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;The previous projects enabled us to add new web servers and set up a load balancer to distribute traffic between them. However, we still had to copy files to the web servers manually. This project automates the process of configuring multiple web servers.&lt;/p&gt;

&lt;p&gt;Agility and the speed with which software solutions are delivered are critical factors in software development. To achieve this, we must automate as much as possible, which ensures quick and repeatable deployments. In this project, we will begin by automating a portion of our routine tasks using Jenkins, a free and open-source tool.&lt;br&gt;
This procedure revolves heavily around the concepts of &lt;b&gt;Continuous Integration (CI)&lt;/b&gt; and &lt;b&gt;Continuous Delivery (CD)&lt;/b&gt;.&lt;br&gt;
CI is a software development practice in which developers integrate code into a shared repository frequently, preferably several times per day. Each integration is then validated by an automated build and automated tests. CD is a software release practice that uses automated testing to determine whether changes to a codebase are correct and stable enough for immediate, automated deployment to a production environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is Jenkins?
&lt;/h4&gt;

&lt;p&gt;Jenkins is a self-contained, open-source automation server written in Java that can automate a wide range of tasks related to building, testing, and delivering or deploying software. As a continuous integration tool, it continuously builds and tests your projects, making it easier for developers to integrate changes and for users to obtain a fresh build. It also enables continuous delivery by integrating with a wide range of testing and deployment technologies.&lt;/p&gt;

&lt;p&gt;Jenkins can be installed using native system packages, Docker, or it can run standalone on any machine that has the Java runtime environment installed.&lt;/p&gt;
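For example, a minimal sketch of the Docker route (using the official jenkins/jenkins LTS image; the container name, ports, and volume name here are illustrative choices, not requirements):

```shell
# Run Jenkins in a container, persisting its home directory in a named volume
# so configuration and jobs survive container restarts.
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```

In this project, however, we install Jenkins directly on an EC2 instance using the system package manager.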

&lt;h4&gt;
  
  
  Why Jenkins?
&lt;/h4&gt;

&lt;p&gt;Jenkins makes it easier for thousands of developers worldwide to build, test, and deploy their software with confidence, because it watches the version control system and triggers and monitors a build whenever changes occur.&lt;/p&gt;

&lt;p&gt;In this project, we will use Jenkins CI to ensure that any changes made to the source code in GitHub are automatically updated on the web servers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Source Code: &lt;a href="https://github.com/manny-uncharted/tooling.git" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Infrastructure: AWS.&lt;/li&gt;
&lt;li&gt;Webserver Linux: Red Hat Enterprise Linux 8.&lt;/li&gt;
&lt;li&gt;Database Server: Ubuntu 20.04 + MySQL.&lt;/li&gt;
&lt;li&gt;Storage Server: Red Hat Enterprise Linux 8 + NFS Server.&lt;/li&gt;
&lt;li&gt;Load Balancer: Ubuntu 20.04.&lt;/li&gt;
&lt;li&gt;Programming Language: PHP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To configure the above steps refer to the previous tutorial &lt;a href="https://github.com/manny-uncharted/configuring-apache-as-a-load-balancer.git" rel="noopener noreferrer"&gt;Here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Task:&lt;/b&gt; Enhance the architecture in the previous project by adding a Jenkins server, and configure a job to automatically deploy source code changes from Git to the NFS server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing and Configuring Jenkins Server
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Install Jenkins on the Jenkins server.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Make an AWS EC2 server based on Ubuntu Server 20.04 LTS and call it "Jenkins."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n5wcy6uwjtq47s1iaw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n5wcy6uwjtq47s1iaw8.png" alt="Jenkins Server" width="775" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the JDK (Java Development Kit) on the Jenkins server, since Jenkins is a Java-based application.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install default-jdk-headless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F647gna1yazgkl22ax3ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F647gna1yazgkl22ax3ty.png" alt="Jenkins Server" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the Jenkins apt repository and install Jenkins.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ &amp;gt; \
    /etc/apt/sources.list.d/jenkins.list'
sudo apt update
sudo apt-get install jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnqhrxjtg60kxzwjn3ga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnqhrxjtg60kxzwjn3ga.png" alt="Jenkins Server" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify that Jenkins is running.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpogy1fhr8uzytfgdlae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpogy1fhr8uzytfgdlae.png" alt="Jenkins Server" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By default, the Jenkins server uses TCP port 8080; open it by creating a new inbound rule in your EC2 security group.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9ih2nrgcywzzz6e1y41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9ih2nrgcywzzz6e1y41.png" alt="Jenkins Server" width="800" height="26"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perform the initial Jenkins setup by accessing the following URL from your browser:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;Jenkins-Server-Public-IP-Address&amp;gt;:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ihdxqwizgtb3lf640nf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ihdxqwizgtb3lf640nf.png" alt="Jenkins Server" width="800" height="645"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Note&lt;/b&gt;: You will be prompted to provide the default admin password.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To retrieve the default admin password, run the following command.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cat /var/lib/jenkins/secrets/initialAdminPassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f486mpfsbvjl2qldjud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f486mpfsbvjl2qldjud.png" alt="Jenkins Server" width="800" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You will then be asked which plugins to install; choose the suggested plugins.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qba19gpnocav3v609at.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qba19gpnocav3v609at.png" alt="Jenkins Server" width="800" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once plugin installation is done, create an admin user; you will then be given your Jenkins server address.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fmanny-uncharted%2Fproject-9-Continous-Integration-Pipeline-For-Tooling-Website%2Fraw%2Fmain%2Fimg%2Fjenkins-admin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fmanny-uncharted%2Fproject-9-Continous-Integration-Pipeline-For-Tooling-Website%2Fraw%2Fmain%2Fimg%2Fjenkins-admin.png" alt="Jenkins Server" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Configure Jenkins to retrieve source code from GitHub via Webhooks.
&lt;/h4&gt;

&lt;p&gt;In this section, we will configure Jenkins to automatically retrieve source code from GitHub whenever there is a change in the source code. Webhooks would be used to accomplish this.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable webhooks in your GitHub repository settings. Go to your GitHub repository and select Settings &amp;gt; Webhooks &amp;gt; Add webhook.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69rsinhj7qmacyc3cjxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69rsinhj7qmacyc3cjxw.png" alt="Jenkins Server" width="760" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the Jenkins web console, click "New Item," and create a Freestyle project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf7inle23snio0gy4n10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf7inle23snio0gy4n10.png" alt="Jenkins Server" width="800" height="893"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To connect your GitHub repository to Jenkins, simply copy the URL from your GitHub repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj06gqut9w7js48uw0mdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj06gqut9w7js48uw0mdd.png" alt="Jenkins Server" width="800" height="645"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In your Jenkins project's configurations, select Git repository, enter the URL of your GitHub repository, and click "Add" to add credentials so Jenkins can access your GitHub repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuqrp632840tnbmlulsj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuqrp632840tnbmlulsj.png" alt="Jenkins Server" width="800" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save the configurations and run the build. When you click "Build Now," you'll see that the build was successful.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqldmzhqzuo5d2onz7jj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqldmzhqzuo5d2onz7jj7.png" alt="Jenkins Server" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can open the build and see if it has run successfully in "Console Output."&lt;/p&gt;

&lt;p&gt;If so, congratulations! You have just completed your first Jenkins build!&lt;/p&gt;

&lt;p&gt;Please keep in mind that this build produces nothing and only runs when we manually trigger it. Let us fix it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In your Jenkins project, open the "Configure" page for your job/project and add these two configurations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build Triggers &amp;gt; Build when a change is pushed to GitHub, triggering the job via the GitHub webhook:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvp21wm8b7v5di4dn4wqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvp21wm8b7v5di4dn4wqy.png" alt="Jenkins Server" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure "Post-build Actions" to archive all files - files generated by a build are referred to as "artifacts."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6rtg5rpbel393wrvj2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6rtg5rpbel393wrvj2f.png" alt="Jenkins Server" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Now, go ahead and make some changes to any file in your GitHub repository (for example, the README.md file) and push the changes to the master branch.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuh3pccpx1j7qzi8dm54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuh3pccpx1j7qzi8dm54.png" alt="Jenkins Server" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We should see that a new build was launched automatically (via webhook) and that its results - artifacts - were saved on the Jenkins server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs12rsq6ge75jbf9qctdy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs12rsq6ge75jbf9qctdy.png" alt="Jenkins Server" width="800" height="760"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: You have now configured an automated Jenkins job to receive files from GitHub via webhook trigger (this method is considered 'push' because the changes are 'pushed' and file transfer is initiated by GitHub). Other methods include: triggering one job (downstream) from another (upstream), polling GitHub on a regular basis, and so on.&lt;/p&gt;
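&lt;p&gt;As a sketch of the polling alternative mentioned above: in the job's "Build Triggers" section you can enable "Poll SCM" with a cron-style schedule instead of a webhook. The schedule below is just an example.&lt;/p&gt;

```shell
# "Poll SCM" schedule (Jenkins cron syntax):
# H/5 * * * *    check GitHub for changes roughly every 5 minutes
```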

&lt;ul&gt;
&lt;li&gt;By default, artifacts are stored locally on the Jenkins server:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls /var/lib/jenkins/jobs/tooling_github/builds/&amp;lt;build_number&amp;gt;/archive/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjswsiu8xybyo4bgvvsb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjswsiu8xybyo4bgvvsb.png" alt="Jenkins Server" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Set up Jenkins to copy files to the NFS server over SSH.
&lt;/h4&gt;

&lt;p&gt;Now that our artifacts are stored locally on the Jenkins server, we need to copy them to the /mnt/apps directory on the NFS server.&lt;br&gt;
Because Jenkins is highly extensible and can be configured to do almost anything, we will use a plugin called "Publish over SSH" to copy files to the NFS server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the "Publish over SSH" plugin. Go to the Jenkins web console and select "Manage Jenkins" &amp;gt; "Manage Plugins" &amp;gt; "Available" &amp;gt; "Publish over SSH" &amp;gt; "Install without restart."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4uaz62h3kr5klqeaooy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4uaz62h3kr5klqeaooy.png" alt="Jenkins Server" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up the job/project to copy artifacts to the NFS server. Select "Manage Jenkins" from the main dashboard, then "Configure System." Scroll down to the Publish over SSH plugin section and configure the connection to your NFS server by providing the following:

&lt;ul&gt;
&lt;li&gt;Name - an arbitrary name for this SSH server&lt;/li&gt;
&lt;li&gt;Private key - the contents of the .pem file that you use to connect to the NFS server via SSH/Putty&lt;/li&gt;
&lt;li&gt;Hostname - your NFS server's private IP address&lt;/li&gt;
&lt;li&gt;Username - ec2-user (since the NFS server is an EC2 instance running RHEL 8)&lt;/li&gt;
&lt;li&gt;Remote directory - /mnt/apps, because our web servers use it as a mount point to retrieve files from the NFS server&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Then save the configurations.&lt;/p&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkmdmbu7r72g1lk2hzxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkmdmbu7r72g1lk2hzxh.png" alt="Jenkins Server" width="800" height="865"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: Test the configuration and ensure that the connection returns Success. Remember that TCP port 22 on the NFS server must be open in order to receive SSH connections.&lt;/p&gt;
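&lt;p&gt;Before relying on the plugin's test button, you can confirm connectivity manually from the Jenkins server. The key path and IP below are placeholders; substitute your own.&lt;/p&gt;

```shell
# From the Jenkins server, verify SSH access to the NFS server
# (replace the key path and IP with your own values):
ssh -i ~/.ssh/nfs-key.pem ec2-user@<nfs-private-ip> 'echo connected'
```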

&lt;ul&gt;
&lt;li&gt;Now, open the Jenkins project configuration page and add another "Post-build Action" to copy the artifacts to the NFS server. Choose the option "Send build artifacts over SSH."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpern2ltafg5lk6ydbg17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpern2ltafg5lk6ydbg17.png" alt="Jenkins Server" width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now configure it to send all files generated by the build to our previously defined remote directory /mnt/apps. In our case, we want to copy all files and directories, so we use ** and then save the configurations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to apply a specific pattern to determine which files to send – &lt;a href="http://ant.apache.org/manual/dirtasks.html#patterns" rel="noopener noreferrer"&gt;use this syntax&lt;/a&gt;. &lt;/p&gt;
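&lt;p&gt;A few example Ant-style patterns, in case you want something narrower than sending everything (these are illustrative, not taken from the original configuration):&lt;/p&gt;

```shell
# Ant-style "Source files" patterns:
# **           every file and directory in the workspace
# **/*.php     all PHP files, in any subdirectory
# html/**      everything under the html/ directory
# *.md         markdown files in the workspace root only
```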

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgyb585w2bowdb6gvj7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgyb585w2bowdb6gvj7l.png" alt="Jenkins Server" width="800" height="860"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now make another change to a file in your GitHub repository (for example, the README.md file) and push it to the master branch. The webhook will trigger the build, and the artifacts will be copied to the NFS server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyhrqyq747baprez607w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyhrqyq747baprez607w.png" alt="Jenkins Server" width="718" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To confirm that the files in /mnt/apps have been updated, connect to your NFS server via SSH/Putty and check the README.md file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /mnt/apps/README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41zu0en2q3j8isk7gbi1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41zu0en2q3j8isk7gbi1.png" alt="Jenkins Server" width="800" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have just completed our first Continuous Integration solution using Jenkins CI.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>tutorial</category>
      <category>aws</category>
      <category>showdev</category>
    </item>
    <item>
      <title>#50DaysOfDevops Challenge Project 1: Configuring a web-app architecture with a network file server attached</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Mon, 17 Oct 2022 21:12:59 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/50daysofdevops-challenge-project-1-configuring-a-web-app-architecture-with-a-network-file-server-attached-18c0</link>
      <guid>https://forem.com/mannyuncharted/50daysofdevops-challenge-project-1-configuring-a-web-app-architecture-with-a-network-file-server-attached-18c0</guid>
<description>&lt;p&gt;This is my first project for my #50DaysOfDevops Challenge. This article is part of a series covering the projects I build during the challenge.&lt;/p&gt;

&lt;p&gt;Here I'll explain how I configured a simple web architecture with a database and a shared network file system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction.

&lt;ul&gt;
&lt;li&gt;Prerequisites.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Step 1 - Preparing the NFS Server.&lt;/li&gt;

&lt;li&gt;Step 2 - Configure the database server.&lt;/li&gt;

&lt;li&gt;Step 3 - Configure the web servers.&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;A network file server allows a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system.&lt;br&gt;
In this article, I'll configure a network file server as part of a simple web architecture.&lt;/p&gt;
&lt;h4&gt;
  
  
  Prerequisites.
&lt;/h4&gt;

&lt;p&gt;Here are the requirements for this project:&lt;br&gt;
&lt;strong&gt;Infrastructure:&lt;/strong&gt; AWS&lt;br&gt;
&lt;strong&gt;Webserver Linux:&lt;/strong&gt; Red Hat Enterprise Linux 8&lt;br&gt;
&lt;strong&gt;Database Server:&lt;/strong&gt; Ubuntu 20.04 + MySQL&lt;br&gt;
&lt;strong&gt;Storage Server:&lt;/strong&gt; Red Hat Enterprise Linux 8 + NFS Server&lt;br&gt;
&lt;strong&gt;Programming Language:&lt;/strong&gt; PHP&lt;br&gt;
&lt;strong&gt;Code Repository:&lt;/strong&gt; &lt;a href="https://github.com/manny-uncharted/tooling" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram below shows a common pattern in which several stateless web servers share a common database and access the same files using a Network File System (NFS) as shared file storage. Even though the NFS server may sit on completely separate hardware, to the web servers it looks like a local file system from which they can all serve the same files.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.darey.io%2Fwp-content%2Fuploads%2F2021%2F07%2FTooling-Website-Infrastructure.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.darey.io%2Fwp-content%2Fuploads%2F2021%2F07%2FTooling-Website-Infrastructure.png" alt="image" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1 - Preparing the NFS Server
&lt;/h3&gt;

&lt;p&gt;As explained above, this server will act as a shared file system from which all the web servers read the same files. In this step, we prepare the NFS server.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a new EC2 instance with RHEL Linux 8 AMI.&lt;/p&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj80zlfai1w3nxrb2tc21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj80zlfai1w3nxrb2tc21.png" alt="image" width="800" height="25"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure LVM on the server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create and attach 3 volumes to the server.&lt;/p&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30lpui8r8rtotgjglwxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30lpui8r8rtotgjglwxv.png" alt="image" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Connect to your Linux instance and check if the volumes are attached using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsblk
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe35bnt80oxdiial4c0o5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe35bnt80oxdiial4c0o5.png" alt="image" width="625" height="259"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use gdisk utility to create a single partition on each of the 3 disks.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gdisk /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqffztk6digqmin00w9v7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqffztk6digqmin00w9v7.png" alt="image" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Then install the lvm2 package using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install lvm2
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and then run the following command to check for available partitions:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lvmdiskscan
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9iu575t8dro181oxdfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9iu575t8dro181oxdfg.png" alt="image" width="736" height="408"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xe12thms25759sssz14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xe12thms25759sssz14.png" alt="image" width="630" height="190"&gt;&lt;/a&gt;&lt;br&gt;
Note: Unlike Ubuntu, which uses apt, Red Hat-based systems use the yum package manager.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now after checking that there's no logical volume present, we need to create a physical volume on each of the 3 disks using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo pvcreate /dev/nvme1n1 
sudo pvcreate /dev/nvme2n1 
sudo pvcreate /dev/nvme3n1
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43wh9kcbhnf6qw5fu55v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43wh9kcbhnf6qw5fu55v.png" alt="image" width="780" height="258"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to verify that our physical volume has been created successfully using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo pvs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lejho1ma1af61rv5iq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lejho1ma1af61rv5iq6.png" alt="image" width="598" height="175"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to create a volume group using the vgcreate utility. We will use the 3 disks we prepared earlier to create a volume group called nfs-vg.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vgcreate nfs-vg /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnp5g25txhwgygkeu0lb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnp5g25txhwgygkeu0lb3.png" alt="image" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the lvcreate utility to create 3 logical volumes: lv-opt, lv-apps, and lv-logs. lv-apps will be used by the web servers, lv-logs for the web server logs, and lv-opt by the Jenkins server.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lvcreate -L 20G -n lv-opt nfs-vg
sudo lvcreate -L 20G -n lv-apps nfs-vg
sudo lvcreate -L 10G -n lv-logs nfs-vg
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9s6pnn7kwe6simpgjyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9s6pnn7kwe6simpgjyt.png" alt="image" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to verify that our logical volumes have been created successfully using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lvs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9akm4ao16y5lt1noyk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9akm4ao16y5lt1noyk5.png" alt="image" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify the entire setup&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vgdisplay -v #view complete setup - VG, PV, and LV
sudo lsblk
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje8g34mxxudzegs0de4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje8g34mxxudzegs0de4x.png" alt="image" width="800" height="817"&gt;&lt;/a&gt;&lt;br&gt;
![image]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use mkfs.xfs to format the logical volumes with xfs filesystem.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkfs -t xfs /dev/nfs-vg/lv-opt
sudo mkfs -t xfs /dev/nfs-vg/lv-apps
sudo mkfs -t xfs /dev/nfs-vg/lv-logs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxqi9j402mx77uskywz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxqi9j402mx77uskywz8.png" alt="image" width="800" height="894"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a directory for each of the logical volumes.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /mnt/apps
sudo mkdir -p /mnt/logs
sudo mkdir -p /mnt/opt
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxea3qeipugf30cj3uijt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxea3qeipugf30cj3uijt.png" alt="image" width="723" height="124"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Mount the logical volumes to the directories we created earlier.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount /dev/nfs-vg/lv-apps /mnt/apps
sudo mount /dev/nfs-vg/lv-logs /mnt/logs
sudo mount /dev/nfs-vg/lv-opt /mnt/opt
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzvmt6l4057lzbkt0asr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzvmt6l4057lzbkt0asr.png" alt="image" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that the logical volumes have been mounted successfully.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo df -h
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87c4kffazlkbazjiug8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87c4kffazlkbazjiug8r.png" alt="image" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to make the mounts persistent. Run blkid to get the UUID of each logical volume, then open /etc/fstab and add an entry for each mount point:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo blkid
sudo nano /etc/fstab
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddh841gl2j9ys3r9b9yw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddh841gl2j9ys3r9b9yw.png" alt="image" width="800" height="211"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld14zmnmnicrzhgk7sbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld14zmnmnicrzhgk7sbl.png" alt="image" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to test the configurations and reload the daemon.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount -a
sudo systemctl daemon-reload
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzym2iwz49pyjp086ke7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzym2iwz49pyjp086ke7.png" alt="image" width="781" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next, install the NFS server itself. Install the nfs-utils package, then start and enable the nfs-server service:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum -y update
sudo yum install -y nfs-utils
sudo systemctl start nfs-server.service
sudo systemctl enable nfs-server.service
sudo systemctl status nfs-server.service
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq9c782g7c7bvwc653kt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq9c782g7c7bvwc653kt.png" alt="image" width="800" height="731"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to set up permissions that will allow our web servers to read, write and execute files on NFS:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chown -R nobody: /mnt/apps
sudo chown -R nobody: /mnt/logs
sudo chown -R nobody: /mnt/opt
sudo chmod -R 777 /mnt/apps
sudo chmod -R 777 /mnt/logs
sudo chmod -R 777 /mnt/opt
&lt;/code&gt;&lt;/pre&gt;
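As a quick illustration of what mode 777 grants, Python's stat.filemode renders the permission bits in ls-style notation. Mode 777 is deliberately wide open (read, write, and execute for owner, group, and everyone), which is acceptable for this lab setup but should be tightened in production:

```python
import stat

# The directory bit plus mode 777: read/write/execute for owner, group, world.
mode = stat.S_IFDIR | 0o777
print(stat.filemode(mode))  # drwxrwxrwx
```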



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru1zms7mzpqqh4rwbhfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru1zms7mzpqqh4rwbhfj.png" alt="image" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to edit the /etc/exports file and add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/exports
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;add the following:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/mnt/apps &amp;lt;Subnet-CIDR&amp;gt;(rw,sync,no_all_squash,no_root_squash)
/mnt/logs &amp;lt;Subnet-CIDR&amp;gt;(rw,sync,no_all_squash,no_root_squash)
/mnt/opt &amp;lt;Subnet-CIDR&amp;gt;(rw,sync,no_all_squash,no_root_squash)
&lt;/code&gt;&lt;/pre&gt;
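If you ever script this NFS setup, the three export entries above can be generated from the subnet CIDR. A minimal sketch follows; the CIDR shown is a placeholder, not a value from this guide. For reference, rw makes the share writable, sync flushes writes before replying, and no_root_squash lets a client's root act as root on the share:

```python
# Build /etc/exports entries with the options used in this guide.
# The example CIDR is a placeholder for your web servers' subnet.
OPTIONS = "rw,sync,no_all_squash,no_root_squash"

def export_line(path, cidr):
    return f"{path} {cidr}({OPTIONS})"

for path in ("/mnt/apps", "/mnt/logs", "/mnt/opt"):
    print(export_line(path, "172.31.0.0/20"))
```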



&lt;p&gt;and then run the command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo exportfs -arv
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z4spduac39tju5vaza6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z4spduac39tju5vaza6.png" alt="image" width="800" height="117"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoapuozjga3mahm25f3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoapuozjga3mahm25f3q.png" alt="image" width="658" height="126"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to check the ports used by NFS and open them in the Security Group (add new inbound rules):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rpcinfo -p | grep nfs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Note: For the NFS server to be accessible from your client, you must also open the following ports: TCP 111, UDP 111, and UDP 2049.&lt;/p&gt;
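If you manage the Security Group from code rather than the AWS console, the rules in the note above map to the IpPermissions structure that boto3's authorize_security_group_ingress accepts. This sketch only builds the data structure (the CIDR is a placeholder, and TCP 2049 is included as well to cover the NFS port reported by rpcinfo):

```python
# Inbound NFS rules in the shape expected by boto3's
# authorize_security_group_ingress. The CIDR is a placeholder.
def nfs_ingress_rules(cidr):
    rules = []
    for proto, port in [("tcp", 111), ("udp", 111), ("tcp", 2049), ("udp", 2049)]:
        rules.append({
            "IpProtocol": proto,
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr}],
        })
    return rules

rules = nfs_ingress_rules("172.31.0.0/20")
```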


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Configure the database server
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a new Ubuntu instance and SSH into it.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i "key.pem" ubuntu@&amp;lt;IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxty0brj6krzisg9uz39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxty0brj6krzisg9uz39.png" alt="image" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the MySQL server using the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install mysql-server
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2x1983lpzs9qgnk1zg3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2x1983lpzs9qgnk1zg3.png" alt="image" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a database called tooling.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql
CREATE DATABASE tooling;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5as7ypcbn8dbdxtlyny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5as7ypcbn8dbdxtlyny.png" alt="image" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a database user and name it webaccess&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER 'webaccess'@'%' IDENTIFIED BY 'password';
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8oe1unqfyu5ow2ewhpj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8oe1unqfyu5ow2ewhpj.png" alt="image" width="800" height="97"&gt;&lt;/a&gt;&lt;br&gt;
Note: Replace the '%' with the subnet CIDR of your web servers so that connections are only allowed from that network.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Grant the webaccess user all privileges on the tooling database.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GRANT ALL PRIVILEGES ON tooling.* TO 'webaccess'@'%';
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8v58pwh3izd74oompyav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8v58pwh3izd74oompyav.png" alt="image" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to flush all privileges.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FLUSH PRIVILEGES;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bzhbhpiptt3th9g2mag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bzhbhpiptt3th9g2mag.png" alt="image" width="550" height="97"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now let's list our databases.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SHOW DATABASES;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F897r4ztidzxqnehhsk1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F897r4ztidzxqnehhsk1j.png" alt="image" width="406" height="369"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now let's navigate to the tooling database and show tables.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USE tooling;
SHOW TABLES;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21t5jfsgtw6krj0urqaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21t5jfsgtw6krj0urqaj.png" alt="image" width="318" height="139"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3: Configure the web servers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a new RHEL instance and SSH into it. Note that Red Hat AMIs use the ec2-user login.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i "key.pem" ec2-user@&amp;lt;IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the NFS client using the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install -y nfs-utils nfs4-acl-tools
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyktmoftqiluaue5ttl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyktmoftqiluaue5ttl2.png" alt="image" width="800" height="749"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a directory called /var/www and mount the NFS server's apps export to it:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir /var/www
sudo mount -t nfs -o rw,nosuid &amp;lt;NFS-Server-IP&amp;gt;:/mnt/apps /var/www
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp82jkts6r1xkqq2oa12v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp82jkts6r1xkqq2oa12v.png" alt="image" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that NFS was mounted successfully.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo df -h
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkma576m4lzd7ui9hkrzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkma576m4lzd7ui9hkrzv.png" alt="image" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure the mount persists on the web server after a reboot by adding it to /etc/fstab.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/fstab
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;add the following:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;NFS-Server-IP&amp;gt;:/mnt/apps /var/www nfs defaults 0 0
&lt;/code&gt;&lt;/pre&gt;
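For reference, each fstab entry has six whitespace-separated fields: the device (here the NFS export), the mount point, the filesystem type, the mount options, the dump flag, and the fsck pass number. A small sketch that splits a line like the one above (the IP is a placeholder standing in for the NFS server's address):

```python
# Parse one fstab line into its six fields.
def parse_fstab_line(line):
    device, mountpoint, fstype, options, dump, passno = line.split()
    return {
        "device": device,
        "mountpoint": mountpoint,
        "fstype": fstype,
        "options": options.split(","),
        "dump": int(dump),
        "pass": int(passno),
    }

entry = parse_fstab_line("172.31.5.10:/mnt/apps /var/www nfs defaults 0 0")
print(entry["mountpoint"], entry["fstype"])  # /var/www nfs
```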


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwim6hry3z73dysj1ftz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwim6hry3z73dysj1ftz4.png" alt="image" width="678" height="46"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Remi's repository, Apache and PHP&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install httpd -y
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

sudo dnf install dnf-utils http://rpms.remirepo.net/enterprise/remi-release-8.rpm

sudo dnf module reset php

sudo dnf module enable php:remi-7.4

sudo dnf install php php-opcache php-gd php-curl php-mysqlnd

sudo systemctl start php-fpm

sudo systemctl enable php-fpm

sudo setsebool -P httpd_execmem 1
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvlkm2afk0th41ogghmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvlkm2afk0th41ogghmv.png" alt="image" width="800" height="741"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gae6u92y32uakjp45m3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gae6u92y32uakjp45m3.png" alt="image" width="800" height="222"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjids6v5iooecrvatkbaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjids6v5iooecrvatkbaa.png" alt="image" width="800" height="226"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefxzg38nxrrz2999i02s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefxzg38nxrrz2999i02s.png" alt="image" width="800" height="641"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferg8remm33zg02gszuhf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferg8remm33zg02gszuhf.png" alt="image" width="800" height="575"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7zij9jmawo67caf6lj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7zij9jmawo67caf6lj8.png" alt="image" width="800" height="786"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7whotrkzj5422jby3z8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7whotrkzj5422jby3z8k.png" alt="image" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat the previous steps on two more web servers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that Apache's files and directories are available on the web server in /var/www and on the NFS server in /mnt/apps. If you see the same files in both locations, NFS is mounted correctly. As a further check, create a new file with touch test.txt on one server and confirm the same file is accessible from the other web servers.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo touch test.txt
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwtv9lpizzbfjp00yh92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwtv9lpizzbfjp00yh92.png" alt="image" width="786" height="54"&gt;&lt;/a&gt;&lt;br&gt;
and on the other web servers:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdeqmfshwotu9yfv95ejv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdeqmfshwotu9yfv95ejv.png" alt="image" width="643" height="75"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to locate Apache's log folder on the web server, mount it to the NFS server's logs export, and make sure the change persists after a reboot on all the web servers.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir /var/log/httpd
sudo mount -t nfs -o rw,nosuid &amp;lt;NFS-Server-IP&amp;gt;:/mnt/logs /var/log/httpd
&lt;/code&gt;&lt;/pre&gt;




&lt;p&gt;and then open /etc/fstab:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/fstab
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;add the following:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;NFS-Server-IP&amp;gt;:/mnt/logs /var/log/httpd nfs defaults 0 0
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fclmxezow55bhcr7vttfe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fclmxezow55bhcr7vttfe.png" alt="image" width="800" height="62"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqnzom0ook55noark5ul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqnzom0ook55noark5ul.png" alt="image" width="763" height="60"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to install git on any of the web servers.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install git
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8zmbzhhth4xnj1cgoxn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8zmbzhhth4xnj1cgoxn.png" alt="image" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to clone the repository from GitHub to the web server.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/manny-uncharted/tooling.git
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcaueb6bxctpduc07oz9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcaueb6bxctpduc07oz9.png" alt="image" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now copy the files from the html folder in the repository to the /var/www/html directory.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd tooling
sudo cp -r html/* /var/www/html/
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;and to verify that the files were copied successfully:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ls /var/www/html/
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbvwhfq2tkunr1shvd9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbvwhfq2tkunr1shvd9o.png" alt="image" width="800" height="37"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhz84qzssk5vupkz740x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhz84qzssk5vupkz740x.png" alt="image" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: If you encounter a 403 error, check the permissions on your /var/www/html folder and also disable SELinux with sudo setenforce 0. To make this change permanent, open the config file with sudo nano /etc/sysconfig/selinux, set SELINUX=disabled, then restart httpd.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Now we need to update the website’s configuration to connect to the database (in /var/www/html/functions.php file).&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /var/www/html/functions.php
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6kgj058fvftud2m9loi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6kgj058fvftud2m9loi.png" alt="image" width="800" height="71"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Then we apply the tooling-db.sql script to the database using these commands:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install mysql -y
mysql -h &amp;lt;private-ip-of-db&amp;gt; -u webaccess -p tooling &amp;lt; tooling-db.sql
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Note: Ensure that you edit the /etc/mysql/mysql.conf.d/mysqld.cnf file to allow remote access to the database.&lt;/p&gt;
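The relevant change in mysqld.cnf is the bind-address directive, which defaults to 127.0.0.1 (localhost only). Here is a sketch of that edit as a plain text transformation; binding to 0.0.0.0 (an assumption here) listens on all interfaces, and binding to the server's private IP instead is stricter. Access should still be limited by the webaccess user's host and the Security Group:

```python
import re

# Rewrite the bind-address line in mysqld.cnf-style text.
# 0.0.0.0 (all interfaces) is an assumption; your private IP is stricter.
def allow_remote_access(config_text, address="0.0.0.0"):
    return re.sub(r"^(\s*bind-address\s*=\s*)\S+",
                  lambda m: m.group(1) + address,
                  config_text, flags=re.MULTILINE)

cnf = "[mysqld]\nbind-address = 127.0.0.1\n"
print(allow_remote_access(cnf))
```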

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F143ljr6qgw0akb628v76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F143ljr6qgw0akb628v76.png" alt="image" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now we need to create a new admin user in MySQL with username myuser and password password:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSERT INTO users (‘id’, ‘username’, ‘password’, ’email’, ‘user_type’, ‘status’) VALUES
(2, ‘myuser’, ‘5f4dcc3b5aa765d61d8327deb882cf99’, ‘user@mail.com’, ‘admin’, 1);
&lt;/code&gt;&lt;/pre&gt;
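The long hex string in that INSERT is not arbitrary: it is the MD5 digest of the plain-text password "password", which is the hashing scheme this tooling app uses for its users table. You can reproduce it in Python (MD5 is cryptographically weak and is used here only because the app expects it):

```python
import hashlib

def tooling_password_hash(plain):
    # MD5 hex digest, matching the value stored in the users table.
    return hashlib.md5(plain.encode()).hexdigest()

print(tooling_password_hash("password"))  # 5f4dcc3b5aa765d61d8327deb882cf99
```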


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzv2o5rgu9aflwpave9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzv2o5rgu9aflwpave9e.png" alt="image" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now we can open the website in the browser and log in with the new user.&lt;br&gt;
Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqvpbcvh8zqzszgqrihs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqvpbcvh8zqzszgqrihs.png" alt="image" width="800" height="791"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Congratulations! You've successfully configured a network file server. In my next project challenge, I'll configure a load balancer using Apache.&lt;/p&gt;

&lt;p&gt;Watch out for it!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>Implementation of a basic web solution using WordPress</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Tue, 09 Aug 2022 14:54:00 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/linux-storage-infrastructure-implementation-of-a-basic-web-solution-using-wordpress-53k</link>
      <guid>https://forem.com/mannyuncharted/linux-storage-infrastructure-implementation-of-a-basic-web-solution-using-wordpress-53k</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dl9w1s0a1znsdsla0ga.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dl9w1s0a1znsdsla0ga.jpg" width="800" height="357"&gt;&lt;/a&gt;&lt;br&gt;
WordPress is a content management system (CMS) that allows you to host and build websites. It has a plugin architecture and a template system, so you can customize any website to fit your business, blog, portfolio, or online store. The focus of this tutorial, however, is not on how to build websites with WordPress.&lt;/p&gt;

&lt;p&gt;In this tutorial, I'll show you how to prepare storage infrastructure on two Linux servers and implement a basic web solution using WordPress. WordPress is a free and open-source content management system written in PHP, paired with MySQL or MariaDB as its backend relational database management system (RDBMS).&lt;/p&gt;

&lt;p&gt;This project consists of two parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Configure storage subsystem for Web and Database servers based on Linux OS. The focus of this part is to give you practical experience of working with disks, partitions and volumes in Linux.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install WordPress and connect it to a remote MySQL database server. This part of the project will solidify your skills of deploying Web and DB tiers of Web solution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a DevOps engineer, you need a deep understanding of the core components of web solutions and the ability to troubleshoot them; both will play an essential role in your further progress and development.&lt;/p&gt;
&lt;h2&gt;
  
  
  Three-tier Architecture
&lt;/h2&gt;

&lt;p&gt;Generally, web or mobile solutions are implemented based on what is called the Three-tier Architecture.&lt;/p&gt;

&lt;p&gt;The Three-tier Architecture is a client-server software architecture pattern that comprises three separate layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;b&gt;Presentation Layer (PL)&lt;/b&gt;: the user interface, such as the browser on your client machine.&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;Business Layer (BL)&lt;/b&gt;: the backend program that implements business logic, i.e. the application or web server.&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;Data Access or Management Layer (DAL)&lt;/b&gt;: the layer for data storage and access, i.e. a database server or a file server such as an FTP or NFS server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project gives you hands-on experience with the Three-tier Architecture while also ensuring that the disks used to store files on the Linux servers are properly partitioned and managed with tools such as gdisk and LVM.&lt;/p&gt;

&lt;p&gt;Requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;b&gt;Your 3-Tier Setup&lt;/b&gt;

&lt;ul&gt;
&lt;li&gt;A Laptop or PC to serve as a client&lt;/li&gt;
&lt;li&gt;An EC2 Linux Server as a web server (This is where you will install WordPress)&lt;/li&gt;
&lt;li&gt;An EC2 Linux server as a database (DB) server&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;b&gt;Note:&lt;/b&gt; We are using Red Hat OS for this project, and you should be able to spin up an EC2 instance on your own. When connecting to a Red Hat instance, use the ec2-user user; the connection string will look like ec2-user@public-ip-address.&lt;/p&gt;


&lt;h2&gt;
  
  
  Creating and mounting Volumes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create and attach a new volume to your Linux server.&lt;/p&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx7lo29olbotu9a3bdh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx7lo29olbotu9a3bdh6.png" width="800" height="36"&gt;&lt;/a&gt;&lt;br&gt;
Note: Ensure that the volume is created in the same availability zone as your Linux server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Connect to your Linux server and check that the volume is attached using this command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsblk
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5y8f3lp8222ixlmr30d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5y8f3lp8222ixlmr30d.png" width="796" height="87"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the df -h command to see all mounts and free space on your server&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo df -h
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk35qdwdx2fif7t9qzg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk35qdwdx2fif7t9qzg7.png" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use gdisk utility to create a single partition on each of the 3 disks&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gdisk /dev/xvdf
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;then create a new partition with the "n" command (accepting the defaults), write the changes with the "w" command, and enter "y" to confirm.&lt;/p&gt;

&lt;p&gt;Also repeat the same for the other two disks&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gdisk /dev/xvdg
sudo gdisk /dev/xvdh
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskdp6e7pczty1lp21593.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskdp6e7pczty1lp21593.png" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
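&lt;li&gt;
&lt;p&gt;As an optional alternative to the interactive gdisk session, the same single-partition layout can be scripted non-interactively with sgdisk (shipped in the same gdisk package); a sketch, assuming the disks are still empty:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create partition 1 spanning the whole disk, on each of the 3 disks
for disk in /dev/xvdf /dev/xvdg /dev/xvdh; do
  sudo sgdisk --new=1:0:0 "$disk"
done
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;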
&lt;li&gt;
&lt;p&gt;Use the lsblk utility to view the newly configured partition on each of the 3 disks.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lsblk
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l5k9xyjh469yue2mq44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l5k9xyjh469yue2mq44.png" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the lvm2 package using sudo yum install lvm2, then run the sudo lvmdiskscan command to check for available partitions.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install lvm2
sudo lvmdiskscan
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vo612x3np83l3f1bffi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vo612x3np83l3f1bffi.png" width="800" height="133"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv0nopykh5i7fys7va6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv0nopykh5i7fys7va6x.png" width="800" height="257"&gt;&lt;/a&gt;&lt;br&gt;
Note: Unlike Ubuntu, which uses apt, Red Hat uses the yum package manager.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the pvcreate utility to mark each of the 3 partitions as physical volumes (PVs) to be used by LVM&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo pvcreate /dev/xvdf2
sudo pvcreate /dev/xvdg2
sudo pvcreate /dev/xvdh2
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlyuuvob6a3pgum9qpi8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlyuuvob6a3pgum9qpi8.png" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that your physical volumes have been created successfully by running sudo pvs&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo pvs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1l2e0lf4ipnnjpv3qms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1l2e0lf4ipnnjpv3qms.png" width="799" height="250"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the vgcreate utility to add all 3 PVs to a volume group (VG). Name the VG webdata-vg&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vgcreate webdata-vg /dev/xvdf2 /dev/xvdg2 /dev/xvdh2
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkka1dldkybhuw6qwtvzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkka1dldkybhuw6qwtvzt.png" width="766" height="55"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the lvcreate utility to create 2 logical volumes: apps-lv (using half of the VG size) and logs-lv (using the remaining space). NOTE: apps-lv will be used to store data for the website, while logs-lv will be used to store log data.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lvcreate -n apps-lv -L 14G webdata-vg
sudo lvcreate -n logs-lv -L 14G webdata-vg
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ztdw4baagt7bl165tt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ztdw4baagt7bl165tt5.png" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
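&lt;li&gt;
&lt;p&gt;If you would rather not hard-code the 14G size, lvcreate also accepts relative sizes via the -l flag; a sketch of the same split expressed in percentages:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lvcreate -n apps-lv -l 50%VG webdata-vg    # half of the volume group
sudo lvcreate -n logs-lv -l 100%FREE webdata-vg # whatever space remains
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;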
&lt;li&gt;
&lt;p&gt;Verify that your logical volumes have been created successfully by running sudo lvs&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lvs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8to5cisb4jqvluk8cmmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8to5cisb4jqvluk8cmmh.png" width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify the entire setup&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vgdisplay -v #view complete setup - VG, PV, and LV
sudo lsblk 
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps2bajzdwahvnhnvpt53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps2bajzdwahvnhnvpt53.png" width="800" height="578"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri78rtkcj9mkpafi2ynl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri78rtkcj9mkpafi2ynl.png" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use mkfs.ext4 to format the logical volumes with the ext4 filesystem&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkfs.ext4 /dev/webdata-vg/apps-lv
sudo mkfs.ext4 /dev/webdata-vg/logs-lv
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzf0xxhl8txl346tfvbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzf0xxhl8txl346tfvbg.png" width="800" height="103"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
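&lt;li&gt;
&lt;p&gt;Optionally, confirm that both logical volumes now carry an ext4 filesystem (the FSTYPE column should read ext4):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lsblk -f /dev/webdata-vg/apps-lv /dev/webdata-vg/logs-lv
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;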
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Creating a directory structure
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create the /var/www/html directory to store website files&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /var/www/html
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F503tr878172meyd92xpz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F503tr878172meyd92xpz.png" width="795" height="49"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create /home/recovery/logs to store a backup of the log data&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /home/recovery/logs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhv502x9ipnicgcznkel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhv502x9ipnicgcznkel.png" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Mount /var/www/html on the apps-lv logical volume&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount /dev/webdata-vg/apps-lv /var/www/html/
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnyvx1e3but646ewrpya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnyvx1e3but646ewrpya.png" width="800" height="29"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the rsync utility to back up all the files in the log directory /var/log into /home/recovery/logs (this is required before mounting the file system)&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rsync -av /var/log/. /home/recovery/logs/
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01memiov0744717ga4uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01memiov0744717ga4uj.png" width="800" height="823"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
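&lt;li&gt;
&lt;p&gt;Tip: rsync supports a dry-run mode, so you can preview exactly what a copy like the one above would transfer before touching anything:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -n (--dry-run) lists the files that would be copied without copying them
sudo rsync -avn /var/log/. /home/recovery/logs/
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;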
&lt;li&gt;
&lt;p&gt;Mount /var/log on the logs-lv logical volume. (Note that mounting will hide all the existing data in /var/log, which is why we backed it up into /home/recovery/logs in the previous step.)&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount /dev/webdata-vg/logs-lv /var/log
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8btn8zyup6u40qdkqi90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8btn8zyup6u40qdkqi90.png" width="800" height="32"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restore the log files back into the /var/log directory&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rsync -av /home/recovery/logs/. /var/log
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fwi72fmoexscqxz2op5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fwi72fmoexscqxz2op5.png" width="800" height="850"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Updating the &lt;code&gt;/etc/fstab&lt;/code&gt; file
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Update the /etc/fstab file so that the mount configuration persists after a restart of the server.&lt;br&gt;
The UUID of each device will be used to update the /etc/fstab file:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo blkid
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4f2mdxrjsnmki41852o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4f2mdxrjsnmki41852o.png" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Update /etc/fstab in this format using your own UUIDs, and remember to remove the leading and trailing quotes.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/fstab
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UUID=&amp;lt;uuid of your webdata-vg-apps&amp;gt; /var/www/html ext4 defaults 0 0
UUID=&amp;lt;uuid of your webdata-vg-logs&amp;gt; /var/log ext4 defaults 0 0
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkittmzwxru0shmm32a00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkittmzwxru0shmm32a00.png" width="800" height="96"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
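&lt;li&gt;
&lt;p&gt;If you prefer not to copy UUIDs by hand, blkid can print the quote-free UUID directly, which you can then append to /etc/fstab; a sketch of the same edit done from the shell:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;APPS_UUID=$(sudo blkid -s UUID -o value /dev/webdata-vg/apps-lv)
LOGS_UUID=$(sudo blkid -s UUID -o value /dev/webdata-vg/logs-lv)
echo "UUID=${APPS_UUID} /var/www/html ext4 defaults 0 0" | sudo tee -a /etc/fstab
echo "UUID=${LOGS_UUID} /var/log ext4 defaults 0 0" | sudo tee -a /etc/fstab
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;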
&lt;li&gt;
&lt;p&gt;Test the configuration and reload the daemon&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount -a
sudo systemctl daemon-reload
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6o6qmjascldt4tv6g9s7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6o6qmjascldt4tv6g9s7.png" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify your setup by running df -h; the output should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo df -h
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7228xae2642q4jty10hq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7228xae2642q4jty10hq.png" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Preparing the Database Server
&lt;/h2&gt;

&lt;p&gt;Launch a second Red Hat EC2 instance that will serve as the 'DB Server'.&lt;br&gt;
Repeat the same steps as for the web server, but instead of apps-lv create db-lv, and mount it to the /db directory instead of /var/www/html/.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;SSH into the instance you just created&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i &amp;lt;your_key_file&amp;gt; ec2-user@&amp;lt;public_ip_address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fz9bjq9yyr6e0a9rpw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fz9bjq9yyr6e0a9rpw6.png" width="800" height="25"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create and attach 3 EBS volumes to the database server instance.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx7lo29olbotu9a3bdh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx7lo29olbotu9a3bdh6.png" width="800" height="36"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Connect to your Linux server and check that the volumes are attached using this command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lsblk
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rt73448nq23xm7bw5ne.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rt73448nq23xm7bw5ne.png" width="592" height="253"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the df -h command to see all mounts and free space on your server&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo df -h
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdcyt8qkasp772p7uvhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdcyt8qkasp772p7uvhm.png" width="727" height="249"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Using the gdisk utility, create a single partition on each of the 3 disks&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gdisk /dev/xvdf
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;then create a new partition with the "n" command, entering 1 as the partition number and accepting the default sizes, then write the changes with the "w" command and enter "y" to confirm.&lt;/p&gt;

&lt;p&gt;Also repeat the same for the other two disks&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gdisk /dev/xvdg
sudo gdisk /dev/xvdh
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4aj0ek3y447gnpma5sw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4aj0ek3y447gnpma5sw.png" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the lsblk utility to view the newly configured partition on each of the 3 disks.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lsblk
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofgb7js9y0t4moubpt83.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofgb7js9y0t4moubpt83.png" width="561" height="339"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the lvm2 package using sudo yum install lvm2, then run the sudo lvmdiskscan command to check for available partitions.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install lvm2
sudo lvmdiskscan
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c2ywnvkarz9b3ttn0jc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c2ywnvkarz9b3ttn0jc.png" width="800" height="357"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n7zntzf2m5uov9ma672.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n7zntzf2m5uov9ma672.png" width="642" height="280"&gt;&lt;/a&gt;&lt;br&gt;
Note: Unlike Ubuntu, which uses apt, Red Hat uses the yum package manager.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the pvcreate utility to mark each of the 3 partitions as physical volumes (PVs) to be used by LVM&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo pvcreate /dev/xvdf2 /dev/xvdg2 /dev/xvdh2
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx56aeam1rs3z1gmygd8x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx56aeam1rs3z1gmygd8x.png" width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that your physical volumes have been created successfully by running sudo pvs&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo pvs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b9i0ivk8qz40d6xo30u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b9i0ivk8qz40d6xo30u.png" width="577" height="156"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the vgcreate utility to add all 3 PVs to a volume group (VG). Name the VG dbdata-vg&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vgcreate dbdata-vg /dev/xvdf2 /dev/xvdg2 /dev/xvdh2
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzak72sgtfz14l9wogts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzak72sgtfz14l9wogts.png" width="800" height="41"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the lvcreate utility to create 2 logical volumes: db-lv (using half of the VG size) and logs-lv (using the remaining space). NOTE: db-lv will be used to store database data, while logs-lv will be used to store log data.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lvcreate -n db-lv -L 14G dbdata-vg
sudo lvcreate -n logs-lv -L 14G dbdata-vg
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd49n2xq28r4by81h3q94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd49n2xq28r4by81h3q94.png" width="800" height="50"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that your logical volumes have been created successfully by running sudo lvs&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lvs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvjxvn59libqee2lx20p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvjxvn59libqee2lx20p.png" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify everything we have done on the database server instance so far with these commands.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vgdisplay -v #view complete setup - VG, PV, and LV
sudo lsblk 
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u9g640nqcoys50px9fe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u9g640nqcoys50px9fe.png" width="800" height="653"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o06d4b880ecb69wgvx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o06d4b880ecb69wgvx1.png" width="787" height="469"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use mkfs.ext4 to format the logical volumes with the ext4 filesystem&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkfs.ext4 /dev/dbdata-vg/db-lv &amp;amp;&amp;amp; sudo mkfs.ext4 /dev/dbdata-vg/logs-lv
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9e1c0j22hbmiqrtt5dx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9e1c0j22hbmiqrtt5dx.png" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
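&lt;p&gt;The mkfs.ext4 step above is destructive, so a small guard can help avoid re-formatting a volume that already holds data. This is only a sketch (the device path is illustrative, and the actual format command is left as a comment):&lt;/p&gt;

```shell
# Guard sketch: only offer to format a volume if blkid finds no existing
# filesystem signature on it. The device path is a placeholder matching the
# guide's dbdata-vg layout; adapt it to your own volume group.
format_if_blank() {
  dev="$1"
  if blkid "$dev" 1>/dev/null 2>/dev/null; then
    echo "refusing to format $dev: filesystem signature present"
    return 1
  fi
  # On the real server, replace this echo with: sudo mkfs.ext4 "$dev"
  echo "would run: mkfs.ext4 $dev"
}
format_if_blank /dev/dbdata-vg/db-lv
```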

&lt;p&gt;Now that we are done configuring the database logical volumes, we move on to creating the mount points for the logical volumes and the required directories.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create the /db directory to store database files&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /db
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgcg5f6ns2tj66ggroay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgcg5f6ns2tj66ggroay.png" width="657" height="37"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create /home/recovery/logs to store a backup of the log data&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /home/recovery/logs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibk23uilr1krqfc1z3h1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibk23uilr1krqfc1z3h1.png" width="800" height="34"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Mount the db-lv logical volume on /db&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount /dev/dbdata-vg/db-lv /db
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj65spl815hkwr5fl7r88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj65spl815hkwr5fl7r88.png" width="800" height="32"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As we did for the web server instance, use the rsync utility to back up all the files in the log directory /var/log into /home/recovery/logs (this is required before mounting the file system, because mounting will hide the existing contents of /var/log)&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rsync -av /var/log/. /home/recovery/logs/
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q2azjmns0p8xx8mcqbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q2azjmns0p8xx8mcqbx.png" width="800" height="607"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Mount the logs-lv logical volume on /var/log. (Note that all the existing data on /var/log will be hidden by the mount, which is why we backed it up to /home/recovery/logs in the previous step.)&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount /dev/dbdata-vg/logs-lv /var/log
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmts27j1j09ohbxyfsxwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmts27j1j09ohbxyfsxwo.png" width="800" height="30"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restore the log files back into the /var/log directory&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rsync -av /home/recovery/logs/. /var/log
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3obf5e94mzpifxb4nk3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3obf5e94mzpifxb4nk3.png" width="800" height="709"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
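&lt;p&gt;The backup/mount/restore pattern above can be tried out anywhere with throwaway directories instead of real mounts. In this sketch, cp -a stands in for rsync -av, and the mount step is shown as a comment:&lt;/p&gt;

```shell
# Re-creation of the backup -> mount -> restore sequence using scratch
# directories, so it is safe to run on any machine. On the server, the
# same pattern wraps the mount of logs-lv over /var/log.
src=$(mktemp -d); backup=$(mktemp -d)
echo "sample log line" > "$src/app.log"
cp -a "$src/." "$backup/"      # 1. back up /var/log before mounting over it
# 2. on the server: sudo mount /dev/dbdata-vg/logs-lv /var/log
cp -a "$backup/." "$src/"      # 3. restore the saved logs into the new fs
cat "$src/app.log"
```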

&lt;p&gt;Now we need to update the /etc/fstab file so that the mount configuration persists across reboots.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;First, find the UUID of each device; these UUIDs will be used in the /etc/fstab entries.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo blkid
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfudttwdxyzjhof75uzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfudttwdxyzjhof75uzg.png" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Update /etc/fstab in the format below using your own UUIDs, and remember to remove the surrounding quotes from the blkid output.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/fstab
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and add this&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UUID=&amp;lt;uuid of your webdata-vg-apps&amp;gt; /var/www/html ext4 defaults 0 0
UUID=&amp;lt;uuid of your webdata-vg-logs&amp;gt; /var/log ext4 defaults 0 0
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi27ffl2zom5kuxgv26mr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi27ffl2zom5kuxgv26mr.png" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Test the configuration and reload the daemon&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount -a
sudo systemctl daemon-reload
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrytzyj0stlstosrvbdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrytzyj0stlstosrvbdn.png" width="800" height="68"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify your setup by running df -h; the output should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo df -h
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7952zu17so579pl4rwpl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7952zu17so579pl4rwpl.png" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
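&lt;p&gt;Copying UUIDs out of the blkid output by hand is error-prone. A minimal sketch of extracting a UUID and printing the matching fstab entry (the blkid line below is made up for illustration):&lt;/p&gt;

```shell
# Pull the UUID out of a blkid-style line with sed and print a ready-made
# /etc/fstab entry. The sample line is fabricated; on the server you would
# feed in the real output of: sudo blkid /dev/dbdata-vg/db-lv
line='/dev/mapper/dbdata--vg-db--lv: UUID="f3c1a9d2-0000-0000-0000-000000000000" TYPE="ext4"'
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "UUID=$uuid /db ext4 defaults 0 0"
```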

&lt;p&gt;Your DB server is now ready; you can make other configurations on it as required.&lt;/p&gt;


&lt;h2&gt;
  
  
  Install WordPress on your Web Server EC2 Instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Update the repository&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum update
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxht8vf89jxgckb9i7wp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxht8vf89jxgckb9i7wp3.png" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install wget, Apache and its dependencies&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum -y install wget httpd php php-mysqlnd php-fpm php-json
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5j0cthhbsru5ixaej1vf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5j0cthhbsru5ixaej1vf.png" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable and start the Apache service&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable httpd
sudo systemctl start httpd
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkj7aaqbd9wv0ho5pljm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkj7aaqbd9wv0ho5pljm.png" width="800" height="58"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install PHP and its dependencies&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo yum install yum-utils http://rpms.remirepo.net/enterprise/remi-release-8.rpm
sudo yum module list php
sudo yum module reset php
sudo yum module enable php:remi-7.4
sudo yum install php php-opcache php-gd php-curl php-mysqlnd
sudo systemctl start php-fpm
sudo systemctl enable php-fpm
setsebool -P httpd_execmem 1
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hwbak1t8lq7in14ifgv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hwbak1t8lq7in14ifgv.png" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart Apache&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart httpd
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6jzf1otgfe5ddqmj1hw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6jzf1otgfe5ddqmj1hw.png" width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download WordPress and copy it to /var/www/html&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir wordpress
cd wordpress
sudo wget http://wordpress.org/latest.tar.gz
sudo tar xzvf latest.tar.gz
sudo rm -rf latest.tar.gz
cp wordpress/wp-config-sample.php wordpress/wp-config.php
cp -R wordpress /var/www/html/
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e3cu53dkodkvrpxuwqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e3cu53dkodkvrpxuwqy.png" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure SELinux Policies&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chown -R apache:apache /var/www/html/wordpress
sudo chcon -t httpd_sys_rw_content_t /var/www/html/wordpress -R
sudo setsebool -P httpd_can_network_connect=1
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp0yd06w15cfpmnxd7ee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp0yd06w15cfpmnxd7ee.png" width="800" height="27"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
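&lt;p&gt;After the installations above, a quick sanity check confirms the expected binaries are on the PATH before moving on. This is a generic sketch; the binary list mirrors the packages installed in this section and the output will vary per host:&lt;/p&gt;

```shell
# Report whether each expected binary from the install steps is on PATH.
check_bins() {
  for bin in "$@"; do
    if command -v "$bin" 1>/dev/null 2>/dev/null; then
      echo "$bin: found"
    else
      echo "$bin: MISSING"
    fi
  done
}
check_bins httpd php wget
```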


&lt;h2&gt;
  
  
  Install MySQL on your DB Server EC2
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Install MySQL on the DB server&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum update
sudo yum install mysql-server
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57idr3n6rpzae7c6notj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57idr3n6rpzae7c6notj.png" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that the service is up and running with sudo systemctl status mysqld. If it is not running, restart the service and enable it so that it starts automatically after a reboot:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart mysqld
sudo systemctl enable mysqld
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jti62lqbqv9memdulaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jti62lqbqv9memdulaq.png" width="800" height="39"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
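&lt;p&gt;If you script this setup, the status check above can be turned into a small polling helper that waits for the unit to come up. A hedged sketch (the service name in the usage comment is the one from this guide; the attempt count is arbitrary):&lt;/p&gt;

```shell
# Poll `systemctl is-active` until a unit reports active, with a bounded
# number of one-second attempts. Returns 0 on success, 1 on timeout.
wait_for_service() {
  svc="$1"; tries="${2:-10}"; i=0
  while [ "$i" -lt "$tries" ]; do
    if systemctl is-active --quiet "$svc" 2>/dev/null; then
      echo "$svc is active"; return 0
    fi
    i=$((i+1)); sleep 1
  done
  echo "$svc did not become active"; return 1
}
# usage on the server: wait_for_service mysqld
```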


&lt;h2&gt;
  
  
  Configuring the DB to work with WordPress
&lt;/h2&gt;

&lt;p&gt;Here we configure the database to work with WordPress by allowing the WordPress server to connect to it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Get the IP address of the WordPress server&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://checkip.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvosomzp78m8i40s5fjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvosomzp78m8i40s5fjd.png" width="800" height="59"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Then create a database and a user that the WordPress server will use to connect.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql
CREATE DATABASE wordpress;
CREATE USER `myuser`@`&amp;lt;Web-Server-Private-IP-Address&amp;gt;` IDENTIFIED BY 'mypass';
GRANT ALL ON wordpress.* TO 'myuser'@'&amp;lt;Web-Server-Private-IP-Address&amp;gt;';
FLUSH PRIVILEGES;
SHOW DATABASES;
exit
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pqs0xxbinpffow3ts9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pqs0xxbinpffow3ts9h.png" width="800" height="592"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
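&lt;p&gt;Since the same IP address appears in both the CREATE USER and GRANT statements, it can help to template the SQL once and pipe it to mysql. A sketch, where WEB_IP and the credentials are placeholders rather than real values:&lt;/p&gt;

```shell
# Build the user-creation SQL with the web server's private IP substituted
# in both statements. WEB_IP, myuser and mypass are placeholders.
WEB_IP="172.31.0.10"
sql="CREATE USER 'myuser'@'$WEB_IP' IDENTIFIED BY 'mypass';
GRANT ALL ON wordpress.* TO 'myuser'@'$WEB_IP';
FLUSH PRIVILEGES;"
printf '%s\n' "$sql"
# apply on the db server with: printf '%s\n' "$sql" | sudo mysql
```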


&lt;h2&gt;
  
  
  Configure WordPress to connect to remote database.
&lt;/h2&gt;

&lt;p&gt;Here we open MySQL port 3306 on the DB Server EC2 instance. For extra security, allow access to the DB server ONLY from your Web Server's IP address: in the Inbound Rule configuration, specify the source as &amp;lt;Web-Server-Private-IP-Address&amp;gt;/32.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Install the MySQL client and test that you can connect from your Web Server to your DB server&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install mysql
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlwhitaihgn7rn6hvjnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlwhitaihgn7rn6hvjnr.png" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to the database on the DB server
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql -u &amp;lt;user&amp;gt; -p -h &amp;lt;DB-Server-Private-IP-address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Note that &amp;lt;user&amp;gt; is the name of the user you created in the MySQL server on the DB server.&lt;br&gt;
Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslayki63vwrxm066ghyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslayki63vwrxm066ghyg.png" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that you can successfully execute the SHOW DATABASES; command and see a list of existing databases.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SHOW DATABASES;
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht9kwnvykpwg28toleo7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht9kwnvykpwg28toleo7.png" width="427" height="274"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Change permissions and configuration so Apache can use WordPress:&lt;br&gt;
Here we create a configuration file for WordPress to point client requests to the wordpress directory.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/httpd/conf.d/wordpress.conf
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and copy and paste the lines below:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;VirtualHost *:80&amp;gt;
ServerAdmin myuser@3.88.215.221
DocumentRoot /var/www/html/wordpress

&amp;lt;Directory "/var/www/html/wordpress"&amp;gt;
Options Indexes FollowSymLinks
AllowOverride all
Require all granted
&amp;lt;/Directory&amp;gt;

ErrorLog /var/log/httpd/wordpress_error.log
CustomLog /var/log/httpd/wordpress_access.log common
&amp;lt;/VirtualHost&amp;gt;
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focijtg6pkm8nd466p0kq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focijtg6pkm8nd466p0kq.png" width="793" height="415"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To apply the changes, restart Apache&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart httpd
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0sr25qxcsfp5jgrkhtj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0sr25qxcsfp5jgrkhtj.png" width="800" height="35"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit the wp-config file&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /var/www/html/wordpress/wp-config.php
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and set the following values:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;define('DB_NAME', 'wordpress');
define('DB_USER', 'myuser');
define('DB_PASSWORD', 'mypass');
define('DB_HOST', '&amp;lt;db-Server-Private-IP-Address&amp;gt;');
define('DB_CHARSET', 'utf8mb4');
define('DB_COLLATE', '');
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv5jjswx7egy3gpvqq8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv5jjswx7egy3gpvqq8a.png" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure SELinux for WordPress&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/wordpress/.*?"
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;b&gt;Note:&lt;/b&gt; The semanage command is not available by default on CentOS, and you might need to install it using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum provides /usr/sbin/semanage
sudo yum install policycoreutils-python-utils
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2x0pbf58rk7hwvgagoz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2x0pbf58rk7hwvgagoz.png" width="800" height="19"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From your browser, try to access your WordPress site at&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;Web-Server-Public-IP-Address&amp;gt;/
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwvoqf8qaohjkwc73feo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwvoqf8qaohjkwc73feo.png" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;
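&lt;p&gt;As a closing note, the wp-config.php edit made earlier can also be done non-interactively with sed, which is handy when automating this setup. The sketch below patches a scratch file standing in for the real config; on the server the target would be /var/www/html/wordpress/wp-config.php, and the value substituted is a placeholder:&lt;/p&gt;

```shell
# Patch a wp-config-style define in place with sed, using a temporary file
# as a stand-in for the real wp-config.php. 'wordpress' here matches the
# database name created earlier in the guide.
patch_cfg() {
  cfg=$(mktemp)
  printf "define('DB_NAME', 'database_name_here');\n" > "$cfg"
  sed -i "s/database_name_here/wordpress/" "$cfg"
  cat "$cfg"
  rm -f "$cfg"
}
patch_cfg
```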

</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>MEAN Stack Implementation on an AWS EC2 instance</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Thu, 04 Aug 2022 13:25:17 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/mean-stack-implementation-on-an-aws-ec2-instance-4gec</link>
      <guid>https://forem.com/mannyuncharted/mean-stack-implementation-on-an-aws-ec2-instance-4gec</guid>
<description>&lt;p&gt;This is the second part of my article on &lt;a href="https://dev.to/mannyuncharted/web-stack-implementation-lamp-stack-in-aws-11j0"&gt;Lamp stack Implementation in AWS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I have already explained what a technology stack is: a set of tools used to develop a software product. In this article, I will implement a simple book register web form application on the MEAN stack on an AWS EC2 instance, using AngularJS as the frontend framework.&lt;br&gt;
The MEAN stack is a set of technologies used to build web applications, and these are the technologies we will be using in this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB: A NoSQL, document-based database used to store application data in the form of documents.&lt;/li&gt;
&lt;li&gt;ExpressJS: A server-side web application framework for Node.js.&lt;/li&gt;
&lt;li&gt;AngularJS: A client-side JavaScript web application framework, used here to handle client requests to the server.&lt;/li&gt;
&lt;li&gt;Node.js: A JavaScript runtime environment. It’s used to run JavaScript on a machine rather than in a browser.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this tutorial, we would be working on the following components of the MEAN stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing NodeJs on our server.&lt;/li&gt;
&lt;li&gt;Installing MongoDB.&lt;/li&gt;
&lt;li&gt;Install Express and set up routes to the server.&lt;/li&gt;
&lt;li&gt;Accessing the routes with AngularJS.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Installing NodeJs on our server
&lt;/h2&gt;

&lt;p&gt;Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Node.js is used in this tutorial to set up the Express routes and AngularJS controllers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Update the Ubuntu server&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep1scolhjo3czk79ttv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep1scolhjo3czk79ttv1.png" alt="Update the ubuntu server" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Upgrade the Ubuntu server&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get upgrade
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb953cw4l2u4u998jh1p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb953cw4l2u4u998jh1p.png" alt="upgrade the ubuntu server" width="800" height="225"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the required certificates&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates

curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvbpyzjms7plrlpqvewl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvbpyzjms7plrlpqvewl.png" alt="Adding the required certificates" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy3v94o9pgaq5szttrzm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy3v94o9pgaq5szttrzm.png" alt="Adding the required certificates" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installing NodeJs&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install nodejs
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1esc5guqvu4b96nizxwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1esc5guqvu4b96nizxwe.png" alt="Installing NodeJs" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we’re done with the initial steps of installing Node.js, we can move on to installing MongoDB.&lt;/p&gt;
&lt;h2&gt;
  
  
  Installing MongoDB
&lt;/h2&gt;

&lt;p&gt;MongoDB stores data in flexible, JSON-like documents. Fields can vary from document to document, and the data structure can change over time. For our example application, we will add book records to MongoDB containing the book name, ISBN number, author, and number of pages.&lt;/p&gt;
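&lt;p&gt;For example, one record in our book register could look like the following JSON-like document (the field values here are illustrative only):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// A sample document as it might be stored in MongoDB
var book = {
    name: 'The Pragmatic Programmer',  // illustrative values
    isbn: '978-0135957059',
    author: 'David Thomas, Andrew Hunt',
    pages: 352
};
console.log(JSON.stringify(book));
&lt;/code&gt;&lt;/pre&gt;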

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Importing the MongoDB signing key&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Then run this command to add the MongoDB repository to the sources list:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mm7voa4nbqbpx6wnmyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mm7voa4nbqbpx6wnmyv.png" alt="installing mongodb key configurations settings" width="800" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqb9p5p82h0lwxj6g73x9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqb9p5p82h0lwxj6g73x9.png" alt="installing mongodb key configurations settings" width="800" height="36"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installing MongoDB&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install -y mongodb
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Note: if you are running a newer Ubuntu release such as 22.04 (Jammy), use the following commands instead to install the libssl1.1 dependency:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install gnupg
echo "deb http://security.ubuntu.com/ubuntu impish-security main" | sudo tee /etc/apt/sources.list.d/impish-security.list

sudo apt-get update

sudo apt-get install libssl1.1
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Then import the MongoDB public GPG key:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Then add MongoDB to the sources list and install it:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list

sudo apt update

sudo apt install -y mongodb-org
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;After installing, enable the MongoDB service:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable mongod
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzkae01q9twkhsrp6qjp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzkae01q9twkhsrp6qjp.png" alt="Installing MongoDB" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Starting MongoDB&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service mongod start
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zk2l2cng8opwc1a2o0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zk2l2cng8opwc1a2o0b.png" alt="Starting MongoDB" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verifying that the service is up and running&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status mongod
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv6ujz7olkic7mq91ddw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv6ujz7olkic7mq91ddw.png" alt="Verifying that mongodb is running" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installing NPM - Node Package Manager&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install -y npm
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegh4ro4e4lxo17wwo7k3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegh4ro4e4lxo17wwo7k3.png" alt="Installing NPM" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, we need to install the ‘body-parser’ package.&lt;br&gt;
The ‘body-parser’ package helps us process JSON payloads passed in requests to the server.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo npm install body-parser
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4l9180fcahlq33pv3yi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4l9180fcahlq33pv3yi.png" alt="Installing body-parser" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creating a folder named books and navigating into it&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir books &amp;amp;&amp;amp; cd books
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2y12j5rhtbony434tmy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2y12j5rhtbony434tmy.png" alt="creating a folder named books and navigating into the folder" width="800" height="81"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the books directory, we need to initialize an npm project&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2zxb40mtiwvlix0fc4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2zxb40mtiwvlix0fc4s.png" alt="Initialize npm project" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add a file named server.js to it&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano server.js
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and then add the following code to the server.js file:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var express = require('express');
var bodyParser = require('body-parser');
var app = express();
app.use(express.static(__dirname + '/public'));
app.use(bodyParser.json());
require('./apps/routes')(app);
app.set('port', 3300);
app.listen(app.get('port'), function() {
    console.log('Server up: http://localhost:' + app.get('port'));
});
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ycmokp27u9ho4x5mlqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ycmokp27u9ho4x5mlqi.png" alt="Add a file to it named server.js" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
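&lt;p&gt;Note that server.js loads ./apps/routes (created in the next section) and immediately calls it with the app object. A minimal sketch of that export-a-function pattern, using a hypothetical stub in place of the real Express app:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// The same pattern routes.js will use: export a function
// that receives the Express app and registers routes on it.
module.exports = function(app) {
    app.get('/book', function(req, res) {
        // handler body goes here
    });
};

// Illustration only: a stub "app" that records registered paths.
var registered = [];
var stubApp = { get: function(path, handler) { registered.push(path); } };
module.exports(stubApp);
console.log(registered);  // the '/book' route has been registered
&lt;/code&gt;&lt;/pre&gt;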


&lt;h2&gt;
  
  
  Installing Express and setting up routes to the server
&lt;/h2&gt;

&lt;p&gt;Now that we've created our server, we need to install the Express framework and set up the routes to the server.&lt;/p&gt;

&lt;p&gt;Express is a minimal and flexible Node.js web application framework that provides features for web and mobile applications. We will use Express to pass book information to and from our MongoDB database.&lt;/p&gt;

&lt;p&gt;We will also use the Mongoose package, which provides a straightforward, schema-based solution for modeling application data. We will use Mongoose to establish a database schema for our book register.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Installing Express and Mongoose&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo npm install express mongoose
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffecserr048x61gnm81pm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffecserr048x61gnm81pm.png" alt="Installing Express" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the ‘books’ folder, create a folder named apps and navigate into it&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir apps &amp;amp;&amp;amp; cd apps
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Now we need to create a file called routes.js&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano routes.js
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and then add the following code to the routes.js file:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var Book = require('./models/book');
module.exports = function(app) {
app.get('/book', function(req, res) {
    Book.find({}, function(err, result) {
    if ( err ) throw err;
    res.json(result);
    });
}); 
app.post('/book', function(req, res) {
    var book = new Book( {
    name:req.body.name,
    isbn:req.body.isbn,
    author:req.body.author,
    pages:req.body.pages
    });
    book.save(function(err, result) {
    if ( err ) throw err;
    res.json( {
        message:"Successfully added book",
        book:result
    });
    });
});
app.delete("/book/:isbn", function(req, res) {
    Book.findOneAndRemove(req.query, function(err, result) {
    if ( err ) throw err;
    res.json( {
        message: "Successfully deleted the book",
        book: result
    });
    });
});
var path = require('path');
app.get('*', function(req, res) {
    // Serve the front end from books/public for any other route
    res.sendFile(path.join(__dirname, '..', 'public', 'index.html'));
});
};
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctaeifs91rm7kc1uv3yy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctaeifs91rm7kc1uv3yy.png" alt="creating a file called routes.js" width="792" height="631"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the ‘apps’ folder, create a folder named models, which will hold all the models for the application, and navigate into it.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir models &amp;amp;&amp;amp; cd models
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4jbeaeki5psi0bxrmyx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4jbeaeki5psi0bxrmyx.png" alt="creating a folder named models and navigating into the folder" width="800" height="66"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In our models folder create a file named book.js&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano book.js
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and then add the following code to the book.js file:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mongoose = require('mongoose');
var dbHost = 'mongodb://localhost:27017/test';
mongoose.connect(dbHost);
mongoose.connection;
mongoose.set('debug', true);
var bookSchema = mongoose.Schema( {
name: String,
isbn: {type: String, index: true},
author: String,
pages: Number
});
var Book = mongoose.model('Book', bookSchema);
module.exports = mongoose.model('Book', bookSchema);
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6uawtpbhvt9a7ph3nro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6uawtpbhvt9a7ph3nro.png" alt="creating a file named book.js" width="766" height="436"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Accessing the routes with AngularJS
&lt;/h2&gt;

&lt;p&gt;AngularJS provides a web framework for creating dynamic views in your web applications. In this tutorial, we use AngularJS to connect our web page with Express and perform actions on our book register.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Now we need to change the directory back to ‘books’&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ../..
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx3c70nw3wrtdb19dacs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx3c70nw3wrtdb19dacs.png" alt="changing the directory back to ‘Books’" width="775" height="75"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the books directory, create a folder named public and navigate into it.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir public &amp;amp;&amp;amp; cd public
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc45fsqt16hnnnmqlcoo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc45fsqt16hnnnmqlcoo1.png" alt="creating a folder named public and navigating into the folder" width="800" height="68"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Then in the public directory, create a file script.js&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano script.js
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and then add the following code to the script.js file:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var app = angular.module('myApp', []);
app.controller('myCtrl', function($scope, $http) {
$http( {
    method: 'GET',
    url: '/book'
}).then(function successCallback(response) {
    $scope.books = response.data;
}, function errorCallback(response) {
    console.log('Error: ' + response);
});
$scope.del_book = function(book) {
    $http( {
    method: 'DELETE',
    url: '/book/:isbn',
    params: {'isbn': book.isbn}
    }).then(function successCallback(response) {
    console.log(response);
    }, function errorCallback(response) {
    console.log('Error: ' + response);
    });
};
$scope.add_book = function() {
    // Build the request body as an object instead of concatenating a JSON string
    var body = {
        name: $scope.Name,
        isbn: $scope.Isbn,
        author: $scope.Author,
        pages: $scope.Pages
    };
    $http({
    method: 'POST',
    url: '/book',
    data: body
    }).then(function successCallback(response) {
    console.log(response);
    }, function errorCallback(response) {
    console.log('Error: ' + response);
    });
};
});
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09m2f01utcrbhbiwaby5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09m2f01utcrbhbiwaby5.png" alt="creating a file named script.js" width="724" height="640"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the public directory create a file called index.html&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano index.html
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and then add the following code to the index.html file:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!doctype html&amp;gt;
&amp;lt;html ng-app="myApp" ng-controller="myCtrl"&amp;gt;
&amp;lt;head&amp;gt;
    &amp;lt;script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.4/angular.min.js"&amp;gt;&amp;lt;/script&amp;gt;
    &amp;lt;script src="script.js"&amp;gt;&amp;lt;/script&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
    &amp;lt;div&amp;gt;
    &amp;lt;table&amp;gt;
        &amp;lt;tr&amp;gt;
        &amp;lt;td&amp;gt;Name:&amp;lt;/td&amp;gt;
        &amp;lt;td&amp;gt;&amp;lt;input type="text" ng-model="Name"&amp;gt;&amp;lt;/td&amp;gt;
        &amp;lt;/tr&amp;gt;
        &amp;lt;tr&amp;gt;
        &amp;lt;td&amp;gt;Isbn:&amp;lt;/td&amp;gt;
        &amp;lt;td&amp;gt;&amp;lt;input type="text" ng-model="Isbn"&amp;gt;&amp;lt;/td&amp;gt;
        &amp;lt;/tr&amp;gt;
        &amp;lt;tr&amp;gt;
        &amp;lt;td&amp;gt;Author:&amp;lt;/td&amp;gt;
        &amp;lt;td&amp;gt;&amp;lt;input type="text" ng-model="Author"&amp;gt;&amp;lt;/td&amp;gt;
        &amp;lt;/tr&amp;gt;
        &amp;lt;tr&amp;gt;
        &amp;lt;td&amp;gt;Pages:&amp;lt;/td&amp;gt;
        &amp;lt;td&amp;gt;&amp;lt;input type="number" ng-model="Pages"&amp;gt;&amp;lt;/td&amp;gt;
        &amp;lt;/tr&amp;gt;
    &amp;lt;/table&amp;gt;
    &amp;lt;button ng-click="add_book()"&amp;gt;Add&amp;lt;/button&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;hr&amp;gt;
    &amp;lt;div&amp;gt;
    &amp;lt;table&amp;gt;
        &amp;lt;tr&amp;gt;
        &amp;lt;th&amp;gt;Name&amp;lt;/th&amp;gt;
        &amp;lt;th&amp;gt;Isbn&amp;lt;/th&amp;gt;
        &amp;lt;th&amp;gt;Author&amp;lt;/th&amp;gt;
        &amp;lt;th&amp;gt;Pages&amp;lt;/th&amp;gt;

        &amp;lt;/tr&amp;gt;
        &amp;lt;tr ng-repeat="book in books"&amp;gt;
        &amp;lt;td&amp;gt;{{book.name}}&amp;lt;/td&amp;gt;
        &amp;lt;td&amp;gt;{{book.isbn}}&amp;lt;/td&amp;gt;
        &amp;lt;td&amp;gt;{{book.author}}&amp;lt;/td&amp;gt;
        &amp;lt;td&amp;gt;{{book.pages}}&amp;lt;/td&amp;gt;

        &amp;lt;td&amp;gt;&amp;lt;input type="button" value="Delete" data-ng-click="del_book(book)"&amp;gt;&amp;lt;/td&amp;gt;
        &amp;lt;/tr&amp;gt;
    &amp;lt;/table&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7p492xbu5h8sfr2gipmx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7p492xbu5h8sfr2gipmx.png" alt="creating a file named index.html" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Change the directory back to books&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ..
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ahs6jgxcptm2u0o8hf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ahs6jgxcptm2u0o8hf9.png" alt="changing the directory back up to Books" width="642" height="73"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start the server by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node server.js
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5asmn1gx6zktgcr596s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5asmn1gx6zktgcr596s.png" alt="starting the server" width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The server is now up and running; we can connect to it on port 3300. You can launch a separate PuTTY or SSH session to test what the curl command returns locally.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s http://localhost:3300
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s5tfp0miibwipwd0icy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s5tfp0miibwipwd0icy.png" alt="curl command" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Expose TCP port 3300 on your system, then get your system’s public IP address (on AWS EC2 this can be queried from the instance metadata service):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt; curl -s http://169.254.169.254/latest/meta-data/public-ipv4 
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmqubb8gnt82io8mujax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmqubb8gnt82io8mujax.png" alt="exposing the tcp 3300 port on your system" width="459" height="289"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;
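&lt;p&gt;With all the files above in place, the project tree under the books directory should look roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;books/
├── package.json
├── server.js
├── node_modules/
├── apps/
│   ├── routes.js
│   └── models/
│       └── book.js
└── public/
    ├── index.html
    └── script.js
&lt;/code&gt;&lt;/pre&gt;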

</description>
      <category>aws</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Implementing the client-server architecture with MYSQL</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Wed, 27 Jul 2022 12:39:03 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/implementing-the-client-server-architecture-with-mysql-41an</link>
      <guid>https://forem.com/mannyuncharted/implementing-the-client-server-architecture-with-mysql-41an</guid>
      <description>&lt;p&gt;Client-Server refers to an architecture in which two or more computers are connected together over a network to send and receive requests between one another.&lt;/p&gt;

&lt;p&gt;In their communication, each machine has its own role: the machine sending requests is usually referred to as the "Client", and the machine responding (serving) is called the "Server".&lt;/p&gt;

&lt;p&gt;In this case, our Web Server has the role of a "Client" that connects and reads/writes to/from a Database (DB) Server (MySQL, MongoDB, Oracle, SQL Server, or any other), and the communication between them happens over a local network (it can also be an Internet connection, but it is common practice to place the Web Server and DB Server close to each other on a local network).&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing a Client Server Architecture using MySQL Database Management System (DBMS).
&lt;/h2&gt;

&lt;p&gt;In this tutorial, I will demonstrate a basic client-server setup using the MySQL Relational Database Management System (RDBMS). Follow the instructions below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create and configure two Linux-based virtual servers (EC2 instances in AWS) with the following names:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Server A name - `mysql server`
Server B name - `mysql client`
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the mysql server Linux server, install the MySQL Server software.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install mysql-server
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0uk8agbzk02awikblt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0uk8agbzk02awikblt5.png" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the mysql client Linux server, install the MySQL Client software.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install mysql-client
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqjtdncce5c79y5sh7p6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqjtdncce5c79y5sh7p6.png" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;By default, both of your EC2 virtual servers are located in the same local virtual network, so they can communicate to each other using local IP addresses. Use mysql server's local IP address to connect from mysql client. MySQL server uses TCP port 3306 by default, so you will have to open it by creating a new entry in ‘Inbound rules’ in ‘mysql server’ Security Groups. For extra security, do not allow all IP addresses to reach your ‘mysql server’ – allow access only to the specific local IP address of your ‘mysql client’.&lt;/p&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F566qrhsxhzzdc7p5ee7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F566qrhsxhzzdc7p5ee7q.png" width="800" height="39"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You might need to configure the MySQL server to allow connections from remote hosts. Edit its configuration file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;and replace the following line:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bind-address = 127.0.0.1 
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;with the following line:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bind-address = 0.0.0.0
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z64ar0x81cpfjrmojrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z64ar0x81cpfjrmojrx.png" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In your mysql-server EC2 instance’s Security Group, allow the IP address of the mysql-client EC2 instance to access port 3306. You can check an instance’s public IP address with:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://checkip.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qm5xqi518ymxleinkgm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qm5xqi518ymxleinkgm.png" width="800" height="19"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On your mysql-server, create a new user with the following SQL:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER 'newuser2'@'44.203.195.178' IDENTIFIED BY 'password2';
GRANT ALL PRIVILEGES ON * . * TO 'newuser2'@'44.203.195.178';
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Note: '44.203.195.178' is the IP address of your mysql-client.&lt;br&gt;
Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta71ko4bmolykf32tuts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta71ko4bmolykf32tuts.png" width="800" height="103"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To connect to the MySQL server, enter the following command on your mysql-client:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u newuser2 -h 18.234.44.89 -p
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Note: The format is&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u &amp;lt;user&amp;gt; -h &amp;lt;ipaddress-of-the-server&amp;gt; -p
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;and run the following to confirm the connection:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;show databases;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz7ye6hyd0q7nph1bxb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz7ye6hyd0q7nph1bxb6.png" width="399" height="336"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mysql</category>
      <category>database</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building a career path to software development with boot.dev</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Tue, 26 Jul 2022 11:07:00 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/building-a-career-path-to-software-development-with-bootdev-5f64</link>
      <guid>https://forem.com/mannyuncharted/building-a-career-path-to-software-development-with-bootdev-5f64</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0ucdfw51hqwfudzvfp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0ucdfw51hqwfudzvfp5.png" alt="Medium" width="602" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hey! New to the world of software engineering, tired of watching tutorial video after tutorial video, and still asking yourself, "What do I learn next?" There are many resources out there that give you information on different tools and languages, but most of the time they never give you a particular order in which you should learn them.&lt;/p&gt;

&lt;p&gt;Many new learners, after watching hours of YouTube videos, find out they can’t build even the most basic applications when they start a project alone. There’s nothing worse than wandering from programming language to programming language and framework to framework, trapped in the “tutorial hell”. Early on in my career I was like that: at the sight of a new programming language I got the urge to try it out. That’s not necessarily a bad thing, but for a beginner it won’t help you grasp the concepts you need. After learning the concepts of programming with one language, build things on your own to make sure you’ve grasped what you just learned; only then should you switch to a different language.&lt;br&gt;
The best way to learn is to write real code by building projects on your own.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;About Boot.dev&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is why I’m introducing you to boot.dev: a one-stop platform to help you learn computer science concepts without the bore of a college degree or the rigor and fast pace of a bootcamp. As our world becomes more tech-literate every day, learning to code is gradually becoming the in thing, and an understanding of computer science concepts goes a long way toward differentiating you from the average developer.&lt;/p&gt;

&lt;p&gt;So whether you’re a beginner looking for a place to kick-start your journey or a competent developer looking to refresh your knowledge of computer science concepts and fundamentals, you should get started with boot.dev.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;About the Curriculum offered&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Boot.dev offers a simple, linear curriculum that’s designed to prepare you for an entry-level role in backend development. And below is a list of topics offered in the curriculum.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learn to code in JavaScript.&lt;/li&gt;
&lt;li&gt;Learn Graphics in HTML5 Canvas&lt;/li&gt;
&lt;li&gt;Learn About HTTP&lt;/li&gt;
&lt;li&gt;Learn Python&lt;/li&gt;
&lt;li&gt;Learn Object Oriented Programming&lt;/li&gt;
&lt;li&gt;Building an SEO Link Analyzer in Python&lt;/li&gt;
&lt;li&gt;Learn Algorithms &lt;/li&gt;
&lt;li&gt;Learn Data Structures&lt;/li&gt;
&lt;li&gt;Learn About Advanced Algorithms&lt;/li&gt;
&lt;li&gt;Building a Maze Solver in Python&lt;/li&gt;
&lt;li&gt;Building a Personal Project 1&lt;/li&gt;
&lt;li&gt;Learn Go&lt;/li&gt;
&lt;li&gt;Building a Social Media Backend in Go&lt;/li&gt;
&lt;li&gt;Learn about Cryptography&lt;/li&gt;
&lt;li&gt;Learn about Functional Programming&lt;/li&gt;
&lt;li&gt;And finally, a Capstone Project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach of project-based learning has been proven to help students get a good grasp of what they learn. One thing I found interesting is the JavaScript course: I personally detest JavaScript, but the way it's explained here makes it easy to understand and grasp. I also took great interest in the cryptography course; since I'm not from a computer science background, I've had to comb the internet and YouTube for useful resources to help me learn these concepts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr9qtgip2mio3vds32do.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr9qtgip2mio3vds32do.png" alt="Simply Get clients" width="557" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How to get started&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To get started with this wonderful curriculum, just visit boot.dev and sign up with your GitHub account or your Google account. If you don’t have a Google account, you can create one at google.com and then sign up with it. &lt;/p&gt;

&lt;p&gt;The first 2 chapters of every course on Boot.dev are totally free and open. The more advanced chapters are also free, but only in "sandbox mode": you can read the lessons and write and run code, you just can't pass off the assignments. To get access to the entire platform, you pay either a monthly plan of $29, a yearly plan of $192, or a one-time payment of $499. I personally feel this curriculum is worth the price and should very well get you ready for your entry-level job as a backend developer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsouthwestkey.org%2Fwp-content%2Fuploads%2F2021%2F08%2FSWK_Community_Based_Programs_Featured_Image_Final.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsouthwestkey.org%2Fwp-content%2Fuploads%2F2021%2F08%2FSWK_Community_Based_Programs_Featured_Image_Final.png" alt="Southwest key Programs" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Support&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A good thing about boot.dev is its support community on Discord, which gives you access to other learners like you as well as to mentors who are always ready to help whenever you get stuck. There is something magical about learning with others: the motivation provided by a good group of peers is strong, and you can keep each other accountable and move forward.&lt;/p&gt;

&lt;p&gt;Get started today! &lt;a href="https://boot.dev/" rel="noopener noreferrer"&gt;Click here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>coding</category>
      <category>beginners</category>
      <category>programming</category>
      <category>career</category>
    </item>
    <item>
      <title>Consensus Protocols (part 1)</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Sun, 17 Jul 2022 20:12:05 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/consensus-protocols-part-1-1nl6</link>
      <guid>https://forem.com/mannyuncharted/consensus-protocols-part-1-1nl6</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;A dive into the underlying structure of the blockchain Ecosystem&lt;/strong&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In this series, I will take a look at several consensus protocols, recreate them in Python, and compare their advantages and disadvantages.&lt;/p&gt;

&lt;p&gt;We've all heard about the blockchain and how it's the web's future, and a lot of individuals are looking into the possibilities; still, the majority of blockchain applications we see in the mainstream media are cryptocurrencies and decentralized finance (DeFi). In this article we'll go under the surface to see how blockchain networks come to an agreement on fresh block generation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;According to &lt;a href="https://www.euromoney.com/learning/blockchain-explained/what-is-blockchain" rel="noopener noreferrer"&gt;Euromoney&lt;/a&gt;, a blockchain is a digital ledger of transactions that is duplicated and distributed across the entire network of computer systems on the blockchain. Each block in the chain contains several transactions, and every time a new transaction occurs on the blockchain, a record of that transaction is added to every participant’s ledger. The decentralized database managed by multiple participants is known as Distributed Ledger Technology (DLT).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A blockchain, in my opinion, is similar to a group of people having information about each other in such a way that it is difficult for anyone outside the group to change information about a member, and before a new member of the group is added, all members of the group must agree. This makes it nearly impossible to cheat the system or the group.&lt;/p&gt;

&lt;p&gt;So the blockchain works by adding blocks of data, and an agreement must be reached to ensure that every block added to the chain is true and based on the agreement of all or a majority of the nodes in the system.&lt;/p&gt;

&lt;p&gt;The agreement that must be reached by the group is called a "consensus" as it is used for verifying information authenticity.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What are consensus protocols?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cdn.hashnode.com/res/hashnode/image/upload/v1649699623722/DqF6eEAu9.jpg" rel="noopener noreferrer"&gt;Consensus-Protocols-Pros-and-Cons.jpg&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.google.com/url?sa=i&amp;amp;url=https%3A%2F%2Fapplicature.com%2Fblog%2Fblockchain-technology%2Fconsensus-protocol&amp;amp;psig=AOvVaw1sbycidC_kB_RWUf5YqF7o&amp;amp;ust=1649785842542000&amp;amp;source=images&amp;amp;cd=vfe&amp;amp;ved=0CAsQjhxqFwoTCPCMoLfJjPcCFQAAAAAdAAAAABAD" rel="noopener noreferrer"&gt;Image source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Consensus protocols serve as the underlying structure of the blockchain and are a key feature of its decentralized nature. They help all the nodes in the network verify the authenticity of transactions, providing a method of confirming what data should be added to the blockchain. These protocols exist to create balance: since no central authority dictates what is right or wrong, all nodes of the blockchain must follow a set of predefined rules, or protocols, on how data is added to the chain.&lt;/p&gt;

&lt;p&gt;We have several blockchain networks emerging today, each with their own consensus protocol/mechanism for reaching an agreement over a transaction or the creation of a new block.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Nodes are members of the network, or certain users within the network, responsible for verifying transactions.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Purpose of Consensus protocols
&lt;/h2&gt;

&lt;p&gt;As we now know, the function of consensus protocols is to help nodes (participants) on a blockchain network agree on choosing what data should be added and what data shouldn’t be added to the network. This provides a level of security to the network. &lt;/p&gt;

&lt;h3&gt;
  
  
  Types of Consensus
&lt;/h3&gt;

&lt;p&gt;Generally, for a consensus protocol to be implemented, it must overcome something called "The Byzantine Fault." Today, with several blockchain networks coming up, each has its own method of overcoming this problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Byzantine Fault
&lt;/h3&gt;

&lt;p&gt;The notion of the Byzantine fault can be traced back to the &lt;a href="https://en.wikipedia.org/wiki/Byzantine_fault_tolerance#Byzantine_Generals'_Problem" rel="noopener noreferrer"&gt;Byzantine General’s problem&lt;/a&gt;, a condition in which nodes in a distributed system fail, and there is imperfect information about whether a component has failed; this can lead to the entire system being taken down.&lt;/p&gt;

&lt;p&gt;In a Byzantine fault, a component such as a server can inconsistently appear both failed and functioning to failure detection systems, presenting different symptoms to different observers. It is difficult for the other components to declare it failed and shut it out of the network because they need to first reach an agreement regarding which component has failed in the first place.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Byzantine Fault Tolerance (BFT)
&lt;/h3&gt;

&lt;p&gt;This is the property of a distributed system that lets it remain fault-tolerant in the presence of malicious actors and imperfections on the network. The problem was formulated to describe a situation in which the system’s actors must agree on a concerted strategy to avoid a catastrophic failure of the system, assuming that some of the actors or nodes in the system are unreliable.&lt;/p&gt;

&lt;p&gt;The goal of a BFT mechanism is to guard against system failures by employing collective decision-making (among both correct and faulty nodes) that aims to reduce the influence of the faulty nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Does BFT Apply to Blockchain?
&lt;/h3&gt;

&lt;p&gt;Its relation to blockchain networks is that the network must be able to reach a certain level of agreement because some of the nodes participating in the agreement can choose to be deceitful.&lt;/p&gt;

&lt;p&gt;Byzantine Fault Tolerance helps protect the network from dangerous system failures and ensures it keeps functioning as intended. It allows both honest and malicious nodes to carry out their tasks without affecting the overall network’s performance, and only agreements made by a majority of the honest nodes are passed.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Practical Byzantine Fault Tolerance(P-BFT) consensus Algorithm
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://pmg.csail.mit.edu/papers/osdi99.pdf" rel="noopener noreferrer"&gt;The Practical Byzantine Fault Tolerance&lt;/a&gt; consensus algorithm was introduced in the late 90s by Barbara Liskov and Miguel Castro.&lt;/p&gt;

&lt;p&gt;Their research work provided a high-performance Byzantine state machine replication, processing thousands of requests per second with a sub-millisecond increase in latency.&lt;/p&gt;

&lt;p&gt;Taking a general overview, &lt;strong&gt;P-BFT&lt;/strong&gt; is a consensus algorithm that can withstand &lt;strong&gt;&lt;em&gt;byzantine faults&lt;/em&gt;&lt;/strong&gt;: a situation in which a group of distributed nodes has to reach an agreement even though some of them may be faulty or compromised.&lt;/p&gt;

&lt;p&gt;In a pBFT system, the nodes are sequentially ordered, with one node being the leader and the others referred to as backup nodes. The system applies majority rule, reaching an agreement based on the information from all the non-compromised nodes as they communicate with each other.&lt;/p&gt;

&lt;p&gt;For a system implementing the pBFT consensus algorithm to function effectively, the number of compromised nodes must not equal or exceed one-third of all nodes in the system within a given window of vulnerability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The maximum number of tolerable faulty nodes is given by the formula f = (n - 1) / 3, rounded down to a whole number,&lt;/strong&gt; where n is the total number of nodes in the system.&lt;/p&gt;

&lt;p&gt;The more nodes there are in a pBFT system, the more secure it becomes.&lt;/p&gt;
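
&lt;p&gt;The bound above is easy to check numerically. Here is a minimal Python sketch of the tolerance formula (the function name is my own):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Maximum number of faulty nodes an n-node pBFT system can tolerate:
# f = (n - 1) // 3, i.e. strictly fewer than one-third of the nodes.
def max_faulty(n):
    return (n - 1) // 3

print(max_faulty(4))   # 1 faulty node tolerated
print(max_faulty(10))  # 3 faulty nodes tolerated
&lt;/code&gt;&lt;/pre&gt;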

&lt;h3&gt;
  
  
  How does a pBFT algorithm reach consensus?
&lt;/h3&gt;

&lt;p&gt;The pBFT consensus rounds are called views and are broken into 4 phases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A client sends a request to the leader node to invoke a service operation.&lt;/li&gt;
&lt;li&gt;The leader node broadcasts the request to the backup nodes.&lt;/li&gt;
&lt;li&gt;The nodes execute the request, then send a reply to the client.&lt;/li&gt;
&lt;li&gt;The client waits for f + 1 replies from different nodes with the same result, where f equals the maximum number of potentially faulty nodes.&lt;/li&gt;
&lt;/ol&gt;
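
&lt;p&gt;The four phases above can be sketched in Python from the client's point of view. This is a toy illustration with names of my own choosing, not a full pBFT implementation:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from collections import Counter

# The client accepts a result once f + 1 different nodes have replied
# with the same value, where f is the maximum number of faulty nodes.
def accept_result(replies, f):
    value, votes = Counter(replies).most_common(1)[0]
    return value if votes &gt;= f + 1 else None

print(accept_result(["ok", "ok", "bad", "ok"], 2))  # ok
print(accept_result(["a", "b", "c"], 1))            # None
&lt;/code&gt;&lt;/pre&gt;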

&lt;p&gt;The pBFT consensus requires a leader, which stands between the client and the backup nodes. During every view (consensus round) the leading node changes, and it can be replaced through a protocol called a view change if a certain amount of time passes without the leading node broadcasting the request. If a leader is determined to be faulty or compromised, the majority of honest nodes can replace it with the next node in line.&lt;/p&gt;

&lt;h2&gt;
  
  
  Projects Implementing the pBFT consensus Algorithm
&lt;/h2&gt;

&lt;p&gt;Many projects are currently implementing the pBFT consensus algorithm.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hyperledger Fabric:&lt;/strong&gt; This is an open-source collaborative blockchain environment hosted by the Linux Foundation. It runs a permissioned version of pBFT. Since permissioned chains use small consensus groups and don’t need the same decentralization as public blockchains, pBFT is effective for providing high transaction throughput.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Zilliqa:&lt;/strong&gt; Zilliqa uses an optimized version of classical pBFT to reach consensus on the data in the blockchain. It also runs a round of proof-of-work roughly every 100 blocks to perform &lt;strong&gt;&lt;em&gt;network sharding&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network sharding:&lt;/strong&gt; network sharding involves splitting miners into smaller groups known as &lt;strong&gt;&lt;em&gt;shards&lt;/em&gt;&lt;/strong&gt;. Where each shard is capable of processing transactions in parallel, yielding a high transaction throughput for the network.&lt;/p&gt;

&lt;p&gt;The network uses multi-signatures, rather than the MACs (Message Authentication Codes) of classical pBFT, to reduce communication overhead.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Benefits of the pBFT
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transaction finality:&lt;/strong&gt; The transactions performed on a network implementing the pBFT consensus do not require multiple confirmations after they have been finalized.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;unlike Bitcoin, where every node individually confirms all transactions before a new block is added to the blockchain (confirmations that can take from ten minutes to an hour), pBFT does not require multiple transaction confirmations.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Energy efficiency:&lt;/strong&gt; pBFT achieves distributed consensus without having to carry out complex mathematical computations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low reward variance:&lt;/strong&gt; Every node in the network has a part to play, as it requires a collective decision made by a majority of honest nodes in responding to the request by the client, which allows for every node to be incentivized leading to low variance in rewarding the nodes that help in decision making.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Disadvantages of the pBFT consensus Algorithm
&lt;/h3&gt;

&lt;p&gt;pBFT works efficiently only when the number of nodes in the network is small.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scaling:&lt;/strong&gt; when it comes to scaling, pBFT falls short, becoming inefficient for large networks. Each node is required to communicate with every other node to keep the network secure, and this all-to-all communication increases cost and decreases transaction throughput as the number of nodes grows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sybil attacks:&lt;/strong&gt; a Sybil attack occurs when a single party creates and manipulates a large number of nodes in the network, compromising its security. This threat is reduced with larger network sizes, but given pBFT’s scalability issues it remains susceptible. For this reason, most networks that implement pBFT use it in combination with another consensus mechanism.&lt;/li&gt;
&lt;/ul&gt;
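
&lt;p&gt;To make the scaling cost concrete: in classical pBFT every node exchanges messages with every other node, so per-round communication grows roughly quadratically with network size. A tiny Python sketch (function name is my own):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Number of ordered node-to-node pairs in an n-node all-to-all exchange.
def message_pairs(n):
    return n * (n - 1)

for n in (4, 16, 64):
    print(n, message_pairs(n))  # grows on the order of n squared
&lt;/code&gt;&lt;/pre&gt;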

&lt;h2&gt;
  
  
  Applications of pBFT for Social Good.
&lt;/h2&gt;

&lt;p&gt;The mainstream applications we see of blockchains and consensus mechanisms are cryptocurrencies and DeFi (Decentralized Finance). And while these applications are great, I want to explore other applications of the blockchain and see how its impact can help the immediate society.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Health Care:&lt;/strong&gt; Nowadays, it’s quite difficult to receive treatment from a hospital other than the one you’re registered with, especially when you’re on a trip. The pBFT mechanism could be implemented among hospitals, where each hospital stands in as a node and patients’ information is shared whenever patients need access to it outside their registered hospital. This would make treating patients easier: the requesting node is verified as honest by the other hospital nodes, and the patient’s records are easily fetched from the shared network.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NGOs (Non-Governmental Organizations):&lt;/strong&gt; Ensuring that the resources raised by NGOs are allocated and used appropriately is quite difficult. The pBFT consensus could help those who donate to these organizations monitor and decide how the funds are managed and utilized, eventually making the process transparent.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article is just the first part of a series on looking beneath the blockchain and understanding how different blockchains reach consensus using various consensus protocols.&lt;/p&gt;

&lt;p&gt;We will seek to understand how these protocols work, I will replicate scaled-down versions of them in Python, and then we will look at applications of these consensus mechanisms and blockchain networks beyond the hype of cryptocurrencies, DeFi, and NFTs.&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>consensusalgorithms</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>WEB STACK IMPLEMENTATION (LAMP STACK) IN AWS</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Fri, 15 Jul 2022 08:59:52 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/web-stack-implementation-lamp-stack-in-aws-11j0</link>
      <guid>https://forem.com/mannyuncharted/web-stack-implementation-lamp-stack-in-aws-11j0</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;What is a Technology stack?&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;A technology stack is a set of frameworks and tools used to develop a software product. These frameworks and tools are chosen very specifically to work together in creating well-functioning software. Stack names are acronyms for the individual technologies used together in a specific product; some examples are…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LAMP (Linux, Apache, MySQL, PHP or Python, or Perl)&lt;/li&gt;
&lt;li&gt;LEMP (Linux, Nginx, MySQL, PHP or Python, or Perl)&lt;/li&gt;
&lt;li&gt;MERN (MongoDB, ExpressJS, ReactJS, NodeJS)&lt;/li&gt;
&lt;li&gt;MEAN (MongoDB, ExpressJS, AngularJS, NodeJS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The focus of this article is on LAMP stack implementation on an AWS EC2 instance.&lt;/p&gt;

&lt;p&gt;LAMP stands for Linux, Apache, MySQL, and PHP. Together, they provide a proven set of software for delivering high-performance web applications. Each component contributes essential capabilities to the stack:&lt;/p&gt;

&lt;p&gt;Linux: The operating system. It is a free and open source operating system (OS) that has been around since the mid-1990s. Linux is popular in part because it offers more flexibility and configuration options than some other operating systems.&lt;/p&gt;

&lt;p&gt;Apache: The web server. The Apache web server processes requests and serves up web assets via HTTP so that the application is accessible to anyone over a simple web URL. Developed and maintained by an open community, Apache is a mature, feature-rich server that runs a large share of the websites currently on the internet.&lt;/p&gt;

&lt;p&gt;MySQL: The database. MySQL is an open source relational database management system for storing application data. With MySQL, you can store all your information in a format that is easily queried with the SQL language. SQL is a great choice if you are dealing with a business domain that is well structured and you want to translate that structure into the backend.&lt;/p&gt;

&lt;p&gt;PHP: The programming language. The PHP open source scripting language works with Apache to help you create dynamic web pages. You cannot use HTML to perform dynamic processes such as pulling data out of a database. If you prefer, you can swap out PHP in favor of Perl or the increasingly popular Python language.&lt;/p&gt;

&lt;h2&gt;
  
  
  How each technology interacts
&lt;/h2&gt;

&lt;p&gt;At a high level, the order of execution shows how the elements interoperate.&lt;br&gt;
The process begins when the Apache web server receives requests for web pages from a user’s browser. If the request is for a PHP file, Apache passes the request to PHP, which loads the file and executes the code contained in the file. PHP also communicates with MySQL to fetch any data referenced in the code. &lt;/p&gt;

&lt;p&gt;PHP then uses the code in the file and the data from the database to create the HTML that browsers require to display web pages. The LAMP stack is efficient at handling not only static web pages, but also dynamic pages where the content may change each time it is loaded depending on the date, time, user identity and other factors. &lt;/p&gt;

&lt;p&gt;After running the file code, PHP then passes the resulting data back to the Apache web server to send to the browser. It can also store this new data in MySQL. And of course, all of these operations are enabled by the Linux operating system running at the base of the stack.&lt;/p&gt;
&lt;h3&gt;
  
  
  Objectives of today's article
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Setting up an AWS EC2 instance&lt;/li&gt;
&lt;li&gt;Installing Apache and updating the firewall&lt;/li&gt;
&lt;li&gt;Installing MySQL&lt;/li&gt;
&lt;li&gt;Installing PHP&lt;/li&gt;
&lt;li&gt;Creating a virtual host for your website using Apache&lt;/li&gt;
&lt;li&gt;Enabling PHP on the website&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Setting up an AWS EC2 instance.
&lt;/h3&gt;

&lt;p&gt;After creating an AWS account, you can create an EC2 instance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Connect to the EC2 instance from your local machine using:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i "your_key_name.pem" ec2-user@your_instance_ip
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruatuyq27va1eia0nbj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruatuyq27va1eia0nbj0.png" alt="connecting to the instance on your local machine" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run updates on the instance using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uaf6cx3mrz2v8cugaq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uaf6cx3mrz2v8cugaq1.png" alt="updating the instance on your local machine" width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  INSTALLING APACHE AND UPDATING THE FIREWALL
&lt;/h3&gt;

&lt;p&gt;Apache is open source software that runs on a server and is used to serve web pages.&lt;br&gt;
In this step, we will install Apache and update the firewall on the EC2 instance we connected to remotely.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Install Apache using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install apache2
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyq8no1ietpv4kghzkvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyq8no1ietpv4kghzkvu.png" alt="installing apache on the instance on your local machine" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that apache2 is running as a service on our instance:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status apache2
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmb9yo13avglp7qwcbkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmb9yo13avglp7qwcbkq.png" alt="verifying that apache is running on the instance on your local machine" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the output shows a green active (running) status after running the command, the installation was successful.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In order to receive any traffic on our web server, we need to open TCP port 80, the default port that web browsers use to access web pages on the internet. &lt;br&gt;
By default, only port 22 is open on our EC2 instance for SSH access, so we need to add a rule to the instance's security group to open inbound connections through port 80.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To test the connection and access the server locally on our Ubuntu machine, we will use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lp4q5uk2kjzqvb9qe2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lp4q5uk2kjzqvb9qe2n.png" alt="testing the connection to the instance on your local machine" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To retrieve the public IP address without checking the AWS web console, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s http://169.254.169.254/latest/meta-data/public-ipv4

&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4dn4npts1zy1f7magqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4dn4npts1zy1f7magqy.png" alt="retrieving the public ip address of the instance on your local machine" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We can then open a web browser and navigate to the public IP address of our EC2 instance.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;your_public_ip_address&amp;gt;:80

&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbwhhb6g0fhyhtyc942n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbwhhb6g0fhyhtyc942n.png" alt="navigating to the public ip address of the instance on your local machine" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
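The port 80 rule from the firewall step can also be added from the command line. A sketch using the AWS CLI, assuming it is installed and configured with your credentials; the security group ID below is a placeholder for the group actually attached to your instance:

```shell
# Hypothetical sketch: open inbound TCP port 80 on the instance's security
# group with the AWS CLI instead of the web console. The group ID is a
# placeholder -- substitute your instance's actual security group ID.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0
```

The equivalent console change is an inbound rule of type HTTP, port 80, source 0.0.0.0/0 on the instance's security group.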
&lt;h3&gt;
  
  
  INSTALLING MYSQL
&lt;/h3&gt;

&lt;p&gt;MySQL is a database management system. It is used to store and retrieve data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Install MySQL using this command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install mysql-server
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftb0f6afh59xtnhbkwc1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftb0f6afh59xtnhbkwc1l.png" alt="installing mysql on the instance on your local machine" width="800" height="425"&gt;&lt;/a&gt;&lt;br&gt;
Note: you will be asked to confirm the installation; type "y" and press the Enter key.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Log in to the MySQL service on our instance:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;This will connect to the MySQL server as the administrative database user root, which is implied by the use of sudo when running this command. You should see output like this.&lt;/p&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0wittm509wdpqv9jak0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0wittm509wdpqv9jak0.png" alt="verifying that mysql is running on the instance on your local machine" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start the interactive security script by running:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql_secure_installation
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;This will ask if you want to configure the VALIDATE PASSWORD PLUGIN.&lt;br&gt;
If you choose to do so, you will be prompted to enter the root password for MySQL.&lt;/p&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fyiuukldkua78329f9s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fyiuukldkua78329f9s.png" alt="starting the interactive script on the instance on your local machine" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;When you’re finished, test if you’re able to log in to the MySQL console by typing:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql -p
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoj9qnfucb20t8lw90tl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoj9qnfucb20t8lw90tl.png" alt="testing the connection to the instance on your local machine" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The -p flag in this command prompts you for the password set when securing the installation.&lt;br&gt;
Type exit to leave the MySQL console.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
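A common next step, though not required for the rest of this walkthrough, is to create a dedicated database and user so applications don't connect as root. A sketch run against the server we just secured; the database name, user name, and password below are all placeholders:

```shell
# Hypothetical sketch: create an application database and user so PHP does
# not have to connect as root. All three names below are placeholders.
# (Requires the MySQL server installed above to be running.)
sudo mysql <<'SQL'
CREATE DATABASE IF NOT EXISTS example_database;
CREATE USER IF NOT EXISTS 'example_user'@'localhost' IDENTIFIED BY 'PassWord.1';
GRANT ALL PRIVILEGES ON example_database.* TO 'example_user'@'localhost';
FLUSH PRIVILEGES;
SQL
```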
&lt;h3&gt;
  
  
  INSTALLING PHP
&lt;/h3&gt;

&lt;p&gt;PHP is a server-side scripting language designed for web development.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In addition to installing the php package, we need php-mysql, a PHP module that allows PHP to communicate with MySQL, and libapache2-mod-php, which enables Apache to handle PHP files:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install php libapache2-mod-php php-mysql
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrbf77x8od6fy0u11ive.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrbf77x8od6fy0u11ive.png" alt="installing php on the instance on our EC2 Instance" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To verify that PHP is installed and check which version was installed, we can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;php -v
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgwxarska1zf298slo0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgwxarska1zf298slo0z.png" alt="verifying that php is installed on the instance on our EC2 Instance" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  CREATING A VIRTUAL HOST FOR YOUR WEBSITE USING APACHE
&lt;/h3&gt;

&lt;p&gt;Virtual hosts allow you to host multiple websites on a single machine without users of the websites ever noticing.&lt;/p&gt;

&lt;p&gt;As we continue with this project, we will set up a domain called projectlamp; you can replace it with any domain of your choice.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create the directory for projectlamp using the mkdir command:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir /var/www/projectlamp
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Next, assign ownership of the directory to your current system user:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chown -R $USER:$USER /var/www/projectlamp
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;We will then create and open a new configuration file in Apache's sites-available directory using vi or vim:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vi /etc/apache2/sites-available/projectlamp.conf
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Note: this creates a blank file. Press "i" on the keyboard to enter insert mode, then paste the following configuration into the file.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;VirtualHost *:80&amp;gt;
    ServerName projectlamp
    ServerAlias www.projectlamp 
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/projectlamp
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
&amp;lt;/VirtualHost&amp;gt;
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Then "ESC" and ":wq" to write and click the "ENTER/RETURN" key to save.&lt;/p&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv588mslndhf0suz98zv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv588mslndhf0suz98zv.png" alt="creating the configuration file for projectlamp on the instance on our EC2 Instance" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We can use the ls command to show the new file in the sites-available directory.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ls /etc/apache2/sites-available/
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
You will see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;000-default.conf  default-ssl.conf  projectlamp.conf
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;With this VirtualHost configuration, we're telling Apache to serve projectlamp using /var/www/projectlamp as its web root directory. If you would like to test Apache without a domain name, you can remove or comment out the ServerName and ServerAlias options by adding a # character at the beginning of each option's line. Adding the # character there tells the program to skip processing the instructions on those lines.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;You can now use the a2ensite command to enable the new virtual host:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo a2ensite projectlamp.conf
&lt;/code&gt;&lt;/pre&gt;



&lt;ul&gt;
&lt;li&gt;You can use the a2dissite command to disable Apache's default website so it does not override your new virtual host:
&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo a2dissite 000-default
&lt;/code&gt;&lt;/pre&gt;



&lt;ul&gt;
&lt;li&gt;To make sure your configuration file doesn’t contain syntax errors, run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apache2ctl configtest
&lt;/code&gt;&lt;/pre&gt;



&lt;ul&gt;
&lt;li&gt;If there are no errors, you can restart Apache with the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart apache2
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiketiz1k4hcktcsurvoa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiketiz1k4hcktcsurvoa.png" alt="enabling the new virtual host on the instance on our EC2 Instance" width="800" height="851"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
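As an aside, if you'd rather skip the vi editing session, the same projectlamp.conf can be written in a single non-interactive command. A sketch; tee is used because a plain shell redirect would not run with sudo's privileges:

```shell
# Alternative to the vi steps above: write the virtual host configuration
# non-interactively. `sudo tee` performs the privileged write; the heredoc
# body is identical to the configuration pasted earlier.
sudo tee /etc/apache2/sites-available/projectlamp.conf > /dev/null <<'CONF'
<VirtualHost *:80>
    ServerName projectlamp
    ServerAlias www.projectlamp
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/projectlamp
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
CONF
```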
&lt;h3&gt;
  
  
  ENABLING PHP ON THE WEBSITE
&lt;/h3&gt;

&lt;p&gt;With the default DirectoryIndex settings on Apache, a file named index.html will always take precedence over an index.php file. This is useful for setting up maintenance pages in PHP applications.&lt;/p&gt;

&lt;p&gt;If we want to change this behavior, we need to edit the /etc/apache2/mods-enabled/dir.conf file and change the order in which the index.php file is listed within the DirectoryIndex directive.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;We change this behavior by editing the dir.conf file:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vi /etc/apache2/mods-enabled/dir.conf
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;and replacing the file's content with:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
&amp;lt;IfModule mod_dir.c&amp;gt;
    #Change this:
    #DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
    #To this:
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
&amp;lt;/IfModule&amp;gt;
```
&lt;/code&gt;&lt;/pre&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;After saving and closing the file, you will need to reload Apache so the changes take effect:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload apache2
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, we will create a PHP script to test that PHP is correctly installed and configured on your server.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have a custom location to host the website's files and folders, we'll create a PHP test script to confirm that Apache is able to handle and process requests for PHP files.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a new file in the /var/www/projectlamp directory called index.php&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim /var/www/projectlamp/index.php
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Paste the following code into the file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php
phpinfo();
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Then we save and close the file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then refresh the page open in the web browser to see something like this.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fmanny-uncharted%2Fproject-1%2Fraw%2Fmain%2Fimg%2FPHP8.1.2-phpinfo%28%29-.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fmanny-uncharted%2Fproject-1%2Fraw%2Fmain%2Fimg%2FPHP8.1.2-phpinfo%28%29-.png" alt="phpinfo()" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After checking the relevant information about your PHP server through that page, it's best to remove the file you created, as it contains sensitive information about your PHP environment and your Ubuntu server. You can use rm to do so:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm /var/www/projectlamp/index.php
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffzkmwn5clzh4k423zx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffzkmwn5clzh4k423zx3.png" alt="editing the dir conf file on the instance on our EC2 Instance" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we implemented the LAMP web stack on an EC2 instance. In the following articles, I'll also explain how to implement other technology stacks.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>php</category>
      <category>mysql</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Fetching an Ethereum Contract's ABI</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Thu, 07 Apr 2022 11:51:15 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/fetching-an-ethereum-contracts-abi-56gk</link>
      <guid>https://forem.com/mannyuncharted/fetching-an-ethereum-contracts-abi-56gk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Being relatively new to the blockchain space, I ran into an issue recently.&lt;/p&gt;

&lt;p&gt;While working on a bot, I needed the smart contract ABI (&lt;strong&gt;Application Binary Interface&lt;/strong&gt;) to interact with the contract itself, and everything I found online required me to have the source code, paste it into Remix, and recompile. That was a whole lot of stress since I'm still relatively new to Solidity, so armed with Python skills and my background as a data person, I just had to find a way out.&lt;/p&gt;

&lt;p&gt;Upon further research, and with my experience working with the Algorand Indexer on the Algorand blockchain, I decided to look up Etherscan's open API endpoints and found something helpful.&lt;/p&gt;

&lt;p&gt;And since all information on the blockchain, including smart contract information, can be accessed by everyone, I used Etherscan's API endpoint to interact with it.&lt;/p&gt;

&lt;p&gt;I wrote a simple script that lets you fetch the contract ABI from the contract address and save it to a JSON file.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an ABI?
&lt;/h2&gt;

&lt;p&gt;ABI stands for &lt;strong&gt;Application Binary Interface&lt;/strong&gt;, an interface between two program modules, one of which is at the level of machine code. The ABI serves as the medium through which data is encoded into or decoded out of machine code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Its relation to the EVM (Ethereum Virtual Machine) and smart contracts
&lt;/h3&gt;

&lt;p&gt;The major component of the Ethereum network is the EVM, which executes the smart contracts stored on the Ethereum blockchain.&lt;br&gt;
These smart contracts are often written in high-level languages and need to be compiled to EVM-executable bytecode: when a smart contract is deployed, the high-level code is compiled into bytecode, which is then stored on the blockchain with an associated address.&lt;br&gt;
The ABI specifies which function in the binary smart contract deployed on the EVM to call, and guarantees that the function returns data in the expected format, which is very similar to how an API (Application Program Interface) works but at a much lower level.&lt;/p&gt;
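Concretely, an ABI is a JSON array describing every function and event a contract exposes. A single illustrative entry, not taken from any particular contract, looks like this:

```json
[
  {
    "name": "balanceOf",
    "type": "function",
    "stateMutability": "view",
    "inputs":  [{ "name": "owner",   "type": "address" }],
    "outputs": [{ "name": "balance", "type": "uint256" }]
  }
]
```

Given an entry like this, a client library knows how to encode a call to balanceOf into the bytes the EVM expects and how to decode the uint256 that comes back.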

&lt;p&gt;You can check out the repository on &lt;a href="https://github.com/manny-uncharted/fetch_contract_abi" rel="noopener noreferrer"&gt;github&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the project from GitHub using:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;git&lt;/span&gt; &lt;span class="n"&gt;clone&lt;/span&gt; &lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;manny&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;uncharted&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;fetch_contract_abi&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;git&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Enter the project directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2. cd fetch_contract_abi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run the script:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python abi_fetch.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will prompt you to enter the smart contract address (&lt;code&gt;You can get the contract address from etherscan.io&lt;/code&gt;) and then save the contract ABI as JSON, in a file named abi.json in the project directory. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F809qfxfna8qgs2xdtww1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F809qfxfna8qgs2xdtww1.png" alt="Image description" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note: This program was written to work only for ERC-721 smart contracts; I will further update the code to work for other networks and other token types.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>tutorial</category>
      <category>python</category>
      <category>showdev</category>
      <category>blockchain</category>
    </item>
    <item>
      <title>My Uncharted Story.</title>
      <dc:creator>Emmanuel Akanji</dc:creator>
      <pubDate>Thu, 30 Dec 2021 13:09:41 +0000</pubDate>
      <link>https://forem.com/mannyuncharted/my-uncharted-story-4jih</link>
      <guid>https://forem.com/mannyuncharted/my-uncharted-story-4jih</guid>
      <description>&lt;p&gt;_Note: I'm not so great at telling amazing stories like you read in novels, but I guess i'll try my best. To all my book lovers out there, more books to chow down the coming year.&lt;br&gt;
_&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Just a regular conversation about my year 2021.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After reading the article (more like a year in review, though he doesn't like to call it that) of one of my mentors, Steve Kolawole, who is yet to know he's my mentor (please don't spill the beans till I summon the courage to tell him): &lt;a href="https://stevenkolawole.medium.com/second-summary-of-my-journals-the-mountains-we-climb-5d1a28a57deb" rel="noopener noreferrer"&gt;link to his review here&lt;/a&gt;. I decided to give mine a try, as there are a few stories I would like to tell about my journey.&lt;/p&gt;




&lt;p&gt;A few months ago, I started the new year with plans to land a job as an entry-level data analyst, so I made plans to work seriously on that while continuing the entrepreneurship journey I started the previous year, 2020. That year was filled with a lot of setbacks: I binge-worked for about a month (literally spending no less than 20 hours each day coding, reading, and doing research), which led to me falling sick for about 3 months. I eventually got my gadgets seized, with intense monitoring of the time I spent on my phone, and my laptop was basically seized till the end of 2020. &lt;/p&gt;

&lt;p&gt;One lesson I picked up was that good health is underrated, so now I try as much as possible to delegate tasks so I can get a very good night's sleep, though it sometimes comes back to bite you when the person you delegate to under-delivers. &lt;/p&gt;

&lt;p&gt;Now, back to my 2021 journey: the new year started with me as a young analyst and a budding entrepreneur, as my team qualified for the Hult Prize nationals. Yay! All pumped up and filled with great expectations.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;January&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Like I said, I was all hyped up in January since my team qualified for the 2020 Hult Prize nationals, and we were making plans to travel to Calabar for the event. Alongside that, I was making plans to land a role as an entry-level data analyst. Regarding the role, building on my previous knowledge of Excel, I took courses that really bumped up my Excel skills, and till now I can confidently build ERP applications in Excel. &lt;em&gt;By the way, Excel is a lot more powerful than we think.&lt;/em&gt; Along the way I learnt about using macros, Power Query, pivot tables, and data visualization concepts. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We were still making plans to go to Calabar, and it was quite stressful, as my parents said they didn't want me to go by road, so we had to change plans and budget for how to raise funds for all 3 of my teammates to go by air. In summary, my January was all about learning.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;February&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Moving forward to February: at this time I had done a few projects in Excel and still have one pending, because I haven't found time to finish it, or let me tell the truth, I generally lost interest in the project. After doing a few Excel projects, I decided Excel was fully wrapped up and dived into Power BI; I chose Power BI over Tableau because, from my research, there are lots of similarities between Excel and Power BI.&lt;br&gt;
Power BI was a breeze for me since I had really taken the time to ground myself in Excel pretty well, and I did projects with it. As for my entrepreneurial journey, I received a mail around late February that, due to COVID-19 regulations, the Hult Prize nationals slated to happen in March in Calabar had been cancelled. This was really upsetting, but looking back now, I'm grateful it happened, as we were able to breeze through the nationals (teams were selected manually), and we later got a mail in March that we qualified for the regionals (more info on that in the story below).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;March - July&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By March, I decided to take my personal development seriously like I did in 2020, resuming my reading spree and splurging on books about entrepreneurship and personal development (Shoe Dog, Ashlee Vance's book on Elon Musk, The 5 AM Club, and a few others I started for the rest of the year), while continuing to work on visualization projects and machine learning competitions. Around June I began my job hunt quest, and it wasn't easy: lots of rejections, and a few roles I qualified for but was dropped from after interviews due to location. I recall one where I had passed the interview and was to start work the following Monday; that Saturday I was sent a document to fill in which required my Social Security Number. That was the beginning of the end of that job. I didn't have one, so I reached out to HR to explain that I'm from Nigeria, which doesn't use SSNs, not knowing that would be the end of the role. Later that evening she responded that the role was only for those based in the US, even though it was a remote role. I was completely heartbroken for days, right when I had been rejoicing that I had joined the ranks of the big boys.&lt;/li&gt;
&lt;li&gt;On the entrepreneurship side of things, we did qualify for the regionals, a stage that involved countries across Africa, and mehn, I was hyped and scared at the same time. The day of the pitch competition at the regionals came, and we came out with one of the best ideas in the competition. But there was also another Nigerian team with an idea similar to my team's, so the judges suggested a partnership that would have been a banger and pushed both teams to the finals (the world stage). It didn't work out: the team that was told to reach out to us never did, but instead went behind our backs and added the features of our pitch to theirs, since it was an open pitch where every team could listen. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The judges did well though: so as not to hurt anyone and to keep things fair, both teams were disqualified and another team was promoted to the finals. What they did felt like backstabbing, and I was hurt and angry at the same time. Looking back now, I'm very happy we didn't merge, because their true colors would eventually have shown. That setback put me down to the point that I became tired of continuing; things regarding the startup became messy, the team stopped responding, and everything went silent. It was a low and quiet period for me. At that point I was contemplating literally taking a break from school, tech and entrepreneurship.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;August&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;By August, though I was still applying for analyst roles, I took a step back and began reviewing what I actually wanted to do. This month was more of a reflective period for my entrepreneurship and tech career; as for school, I was just moving through it all. I registered for another competition in school, Spirit of Enterprise, and came out in the top 4, which was a real boost at the time. Then came the setback: a member of the team told me he wanted to drop off to focus on his final year in school. I supported him, but it sent my already delicate entrepreneurship career into another round of reconsidering whether to stop or continue. It got to the point where I had talks with my cousin and told him about everything; he was willing to support me taking a break for at least a year, landing a good job and joining him abroad. It sounded like a pretty good idea back then, but this young man didn't take it up. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As for my tech career, I was still applying on the side, but not as intensively as before; I was taking up personal projects and reviewing my career choice, and I eventually decided to go for what I really wanted. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now that I recall, I got ahead of myself by submitting an application to Y Combinator🤣🤣🤣🤣. I was trying to chill with the big boys with an idea that was capital intensive and not sustainable in Nigeria, which I only realized towards the end of the year in an incubator programme I qualified for. The rejection mail came in, but it was softened by their comments on my application. I then applied to a Nigerian university-focused incubator programme, Inqumax; I applied reluctantly, but it became the best thing that happened to me towards the end of the year.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;September&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;At this time I came to understand that my background in SWE wasn't great, as my goal was to transition from data analyst to ML engineer. I gradually turned away from ML competitions and slowly restarted the whole process, which had me learning Python all over again😢😢😢. And mehn, it was one hell of a struggle for me. But I'm glad I did it, and that I read books on core concepts in computer science: a lot of people getting started in ML think they don't need them, but this year proved me wrong, as I had to pause and restart my journey with SWE involved. I've been able to build projects like bots in Python, and to work with APIs while building trading algorithms. One small trading algorithm computes the number of shares a person can buy based on their portfolio size, and there are a number of other cool Python projects; they can't get me a full-time job, but they come in handy along the way.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Towards the end of September, I had my qualifying pitch for entry into the incubator programme, and I qualified. Little did I know that I had just signed up for three months of intensive work, but overall it was very much worth it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
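&lt;p&gt;&lt;em&gt;A minimal Python sketch of that position-sizing idea (the function name, the fixed-percentage allocation and the numbers are my own illustration here, not the original script):&lt;/em&gt;&lt;/p&gt;

```python
# Hypothetical sketch: how many whole shares fit into a fixed
# slice of a portfolio. Names and the percentage split are illustrative.
def shares_affordable(portfolio_size: float, allocation_pct: float, share_price: float) -> int:
    """Return the number of whole shares purchasable with a given
    percentage of the portfolio."""
    if share_price <= 0:
        raise ValueError("share_price must be positive")
    budget = portfolio_size * allocation_pct / 100.0
    return int(budget // share_price)

# e.g. a $10,000 portfolio, allocating 10% per position, shares at $42.50
print(shares_affordable(10_000, 10, 42.5))  # -> 23
```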




&lt;p&gt;Now looking at it I think I really had a long year, time to wrap things up and summarize the rest of my year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;October - December&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;At this point I was fully focused on consuming knowledge of software engineering: reading articles on machine learning and taking courses on the mathematics of machine learning while building my SWE skillset, with much less hands-on ML except when I had to prepare for the bootcamp. I only made the one submission that allowed me to qualify, plus helping when people asked for help with their models; aside from that, it was mostly learning and coding in Python. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regarding my entrepreneurship career, my programme with Inqumax commenced in October, and it was one of the best things that happened to me, with mentors like Henry Ukoha, Benjamin Udokwu and Cynthia Chisom always there to guide me through the process of refining my idea. The programme was great, but the deliverables were challenging, and they forced me to learn how to plan and prioritize my activities to the extreme, as I always wanted to meet the deadlines. It brought me out of my comfort zone and gave me, to an extent, the ability to share my startup idea and goals in front of anyone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I recall a time during the programme when an investor was brought in for a demo pitch, and after my pitch she literally rubbished my whole idea. She was right, but it spoiled my mood for the entire weekend. Imagine having someone rubbish everything you spent so much time refining; even though she gave valid points, it got to me.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;u&gt;&lt;strong&gt;Highlight of my wins&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Deciding to restart my whole ML career with software engineering in focus. It initially seemed tough, but I'm grateful I did.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Volunteering to become very active in the DSN and GDSC community. This was a great opportunity to meet indirect mentors that helped motivate me and push me to keep going.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Joining Inqumax's incubator programme. This has been one of my biggest wins this year, as it has helped build my time management, people skills and confidence in approaching challenges.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Refining my idea into a semi-startup. Yeah, I'm almost at the step where I can call myself a founder; it's been a very long three months of hard work that paid off. The startup is focused on building solutions for the agricultural sector.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Getting to understand the agricultural sector and interacting first-hand with those in the space. I joined the PARI (Program of Accompanying Research for Agricultural Innovation) workshop in November and enjoyed the discussions about the future of agriculture in Nigeria. I also joined Farmcrowdy's anniversary event, which featured important discussions on the challenges of the agricultural sector in Nigeria and the next steps to achieve improvement.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Lessons learnt so far&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I've learnt never to hold onto people so tightly that you can't let them go or do without them; life happens, and you might not understand the reasons behind their decisions. This came from losing 2 of my 4 teammates this year. It was tough balancing the workload, but well, we move on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Starting my tech career all over again with SWE in mind was daunting, but it made the journey clearer and faster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Never advise anyone just getting started in ML to attempt ML engineering without SWE in focus; the SWE foundation eventually pays off.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I found out I never had a social life this year; I'm trying to improve on that next year, and I'm open to meeting new intelligent minds to have brilliant conversations with.&lt;/p&gt;

&lt;p&gt;Glad I met new friends and people I became much closer to.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;New year milestones&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Becoming an ML engineer building E2E projects that are also decentralized (literally merging ML engineering with blockchain).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contributing to open source much more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Landing a full-time role by mid-next year.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Working on onboarding at least 500 signups for my startup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attending Tech Events and Meeting new friends.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Speaking at one tech event before the end of the year.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Investing in setting up my workspace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dedicating more time to having fun and Twitter memes to cool off.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Let's see what this new year brings and how we go about it. Fully into smashing big milestones.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>storytelling</category>
      <category>machinelearning</category>
      <category>selfimprovement</category>
    </item>
  </channel>
</rss>
