<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Nikhil Raj A </title>
    <description>The latest articles on Forem by Nikhil Raj A  (@nikhilraj-2003).</description>
    <link>https://forem.com/nikhilraj-2003</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3259740%2F9e21721b-e12c-40ea-a4be-6f5b079dc736.jpg</url>
      <title>Forem: Nikhil Raj A </title>
      <link>https://forem.com/nikhilraj-2003</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/nikhilraj-2003"/>
    <language>en</language>
    <item>
      <title>Building a Netflix Clone with DevSecOps: A Complete DevSecOps Project.</title>
      <dc:creator>Nikhil Raj A </dc:creator>
      <pubDate>Sun, 13 Jul 2025 04:27:14 +0000</pubDate>
      <link>https://forem.com/nikhilraj-2003/building-a-netflix-clone-with-devsecops-a-complete-devsecops-project-1ll6</link>
      <guid>https://forem.com/nikhilraj-2003/building-a-netflix-clone-with-devsecops-a-complete-devsecops-project-1ll6</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In today’s rapidly evolving cloud landscape, &lt;strong&gt;DevSecOps&lt;/strong&gt; is no longer optional — it is essential for delivering secure, scalable, and high-quality applications. This project demonstrates &lt;strong&gt;how to deploy a Netflix Clone on AWS using Jenkins CI/CD pipelines&lt;/strong&gt;, Docker, SonarQube, Trivy, Prometheus, and Grafana while adhering to &lt;strong&gt;DevSecOps best practices&lt;/strong&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;  AWS Account.&lt;/li&gt;
&lt;li&gt;  Basic knowledge of Git, Docker, Kubernetes, and Linux.&lt;/li&gt;
&lt;li&gt;  Jenkins and CI/CD familiarity.&lt;/li&gt;
&lt;li&gt;  TMDB API Key for the Netflix clone.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0hzfwzqgq63dplgjbbj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0hzfwzqgq63dplgjbbj.gif" alt="Workflow of the Project" width="760" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 1: Initial Setup and Deployment
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Step 1: Launch EC2 (Ubuntu 22.04):&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  Provision an EC2 instance on AWS running Ubuntu 22.04, with the t2.large instance type and 25 GB of storage.&lt;/li&gt;
&lt;li&gt;  Connect to the instance using SSH.&lt;/li&gt;
&lt;/ul&gt;
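
&lt;p&gt;For example, connecting might look like this (the key file name and IP address below are placeholders, not values from this project):&lt;/p&gt;

```shell
# Hypothetical key file name and example IP address; substitute the values
# from your own EC2 console.
touch netflix-key.pem            # stand-in for the key pair downloaded from AWS
chmod 400 netflix-key.pem        # SSH refuses private keys with open permissions
# ssh -i netflix-key.pem ubuntu@203.0.113.10
```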

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Step 2: Clone the Code:&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  Update all packages on the instance.&lt;/li&gt;
&lt;li&gt;  Clone your application’s code repository onto the EC2 instance:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/NikhilRaj-2003/devsecops-netflix-clone.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Step 3: Install Docker and Run the App Using a Container:&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  Set up Docker on the EC2 instance:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker $USER  # add your login user (e.g. 'ubuntu') to the docker group
newgrp docker
sudo chmod 777 /var/run/docker.sock  # convenient for a demo, but insecure; prefer the group membership above
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Build and run your application using Docker containers:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t netflix .
docker run -d --name netflix -p 8081:80 netflix:latest
# to stop the container and delete the image
docker stop &amp;lt;containerid&amp;gt;
docker rmi -f netflix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The app will show an error at this point because it needs a TMDB API key.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Step 4: Get the API Key:&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  Open a web browser and navigate to TMDB (The Movie Database) website.&lt;/li&gt;
&lt;li&gt;  Click on “Login” and create an account.&lt;/li&gt;
&lt;li&gt;  Once logged in, go to your profile and select “Settings.”&lt;/li&gt;
&lt;li&gt;  Click on “API” from the left-side panel.&lt;/li&gt;
&lt;li&gt;  Create a new API key by clicking “Create” and accepting the terms and conditions.&lt;/li&gt;
&lt;li&gt;  Provide the required basic details and click “Submit.”&lt;/li&gt;
&lt;li&gt;  You will receive your TMDB API key.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Now rebuild the Docker image with your API key:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build --build-arg TMDB_V3_API_KEY=&amp;lt;your-api-key&amp;gt; -t netflix .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuib5zegkvf0q8epr6hbr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuib5zegkvf0q8epr6hbr.jpeg" alt="Netflix Clone" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Phase 2: Initializing Security Checks for Code and Image Quality
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Install SonarQube and Trivy:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  Install SonarQube and Trivy on the EC2 instance to scan for vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;SonarQube:&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To access SonarQube, open &lt;code&gt;http://&amp;lt;publicIP&amp;gt;:9000&lt;/code&gt; (the default username and password are both &lt;code&gt;admin&lt;/code&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Trivy:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;To scan an image using Trivy:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trivy image &amp;lt;imageid&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
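
&lt;p&gt;In a CI pipeline you will usually want the scan to fail the build when serious vulnerabilities are found. Trivy supports this through its &lt;code&gt;--severity&lt;/code&gt; and &lt;code&gt;--exit-code&lt;/code&gt; flags (a sketch; the image name is an example):&lt;/p&gt;

```shell
# exit non-zero (failing a CI stage) only when HIGH or CRITICAL issues are found
trivy image --severity HIGH,CRITICAL --exit-code 1 netflix:latest
```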



&lt;h2&gt;
  
  
  Phase 3: CI/CD Setup
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;1. Install Jenkins for Automation:&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  Install Jenkins on the EC2 instance to automate deployment. First install Java, then add the Jenkins repository and install Jenkins:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update 
sudo apt install fontconfig openjdk-17-jre 
java -version openjdk version "17.0.8" 2023–07–18
 OpenJDK Runtime Environment (build 17.0.8+7-Debian-1deb12u1) 
OpenJDK 64-Bit Server VM (build 17.0.8+7-Debian-1deb12u1, mixed mode, sharing) 
#jenkins sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \ 
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key 
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \ 
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \ /etc/apt/sources.list.d/jenkins.list &amp;gt; /dev/null 
sudo apt-get update sudo apt-get install jenkins 
sudo systemctl start jenkins 
sudo systemctl enable jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Access Jenkins in a web browser using the public IP of your EC2 instance.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;http://&amp;lt;publicIP&amp;gt;:8080&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
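
&lt;p&gt;When you first open Jenkins it asks for the initial admin password, which is stored at a standard path on the instance:&lt;/p&gt;

```shell
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```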

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;2. Install Necessary Plugins in Jenkins:&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Go to Manage Jenkins → Plugins → Available Plugins.&lt;/p&gt;

&lt;p&gt;Install the following plugins:&lt;/p&gt;

&lt;p&gt;1 &lt;strong&gt;Eclipse Temurin Installer&lt;/strong&gt; (Install without restart)&lt;/p&gt;

&lt;p&gt;2 &lt;strong&gt;SonarQube Scanner&lt;/strong&gt; (Install without restart)&lt;/p&gt;

&lt;p&gt;3 &lt;strong&gt;NodeJS Plugin&lt;/strong&gt; (Install without restart)&lt;/p&gt;

&lt;p&gt;4 &lt;strong&gt;Email Extension Plugin&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Java and Nodejs in Global Tool Configuration
&lt;/h2&gt;

&lt;p&gt;Go to Manage Jenkins → Tools → install JDK (17) and NodeJS (16) → click Apply and Save.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;SonarQube&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  Create a token in SonarQube.&lt;/li&gt;
&lt;li&gt;  Go to Jenkins Dashboard → Manage Jenkins → Credentials → add the token as Secret text.&lt;/li&gt;
&lt;li&gt;  After adding the Sonar token, click Apply and Save.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Configure System&lt;/strong&gt; option in Jenkins is used to configure different servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global Tool Configuration&lt;/strong&gt; is used to configure the different tools that we install using plugins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We will install a sonar scanner in the tools.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create a Jenkins webhook&lt;/p&gt;
&lt;/blockquote&gt;
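
&lt;p&gt;In SonarQube, go to Administration → Configuration → Webhooks and add a webhook pointing back at Jenkins so that the quality gate stage can complete. The SonarQube Scanner plugin for Jenkins listens on the &lt;code&gt;sonarqube-webhook&lt;/code&gt; endpoint (the host below is a placeholder):&lt;/p&gt;

```plaintext
http://&amp;lt;your-jenkins-ip&amp;gt;:8080/sonarqube-webhook/
```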

&lt;ol&gt;
&lt;li&gt; Configure CI/CD Pipeline in Jenkins:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  Create a CI/CD pipeline in Jenkins to automate your application deployment.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any
    tools {
        jdk 'jdk17'
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        stage('clean workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Checkout from Git') {
            steps {
                git branch: 'main', url: 'https://github.com/NikhilRaj-2003/devsecops-netflix-clone.git'
            }
        }
        stage("Sonarqube Analysis") {
            steps {
                withSonarQubeEnv('sonar-server') {
                    sh '''$SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \
                    -Dsonar.projectKey=Netflix'''
                }
            }
        }
        stage("quality gate") {
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
                }
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;Install Dependency-Check and Docker Tools in Jenkins&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Install Dependency-Check Plugin:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  Go to “Dashboard” in your Jenkins web interface.&lt;/li&gt;
&lt;li&gt;  Navigate to “Manage Jenkins” → “Manage Plugins.”&lt;/li&gt;
&lt;li&gt;  Click on the “Available” tab and search for “OWASP Dependency-Check.”&lt;/li&gt;
&lt;li&gt;  Check the checkbox for “OWASP Dependency-Check” and click on the “Install without restart” button.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Configure Dependency-Check Tool:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  After installing the Dependency-Check plugin, you need to configure the tool.&lt;/li&gt;
&lt;li&gt;  Go to “Dashboard” → “Manage Jenkins” → “Global Tool Configuration.”&lt;/li&gt;
&lt;li&gt;  Find the section for “OWASP Dependency-Check.”&lt;/li&gt;
&lt;li&gt;  Add the tool’s name, e.g., “DP-Check.”&lt;/li&gt;
&lt;li&gt;  Save your settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Install Docker Tools and Docker Plugins:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  Go to “Dashboard” in your Jenkins web interface.&lt;/li&gt;
&lt;li&gt;  Navigate to “Manage Jenkins” → “Manage Plugins.”&lt;/li&gt;
&lt;li&gt;  Click on the “Available” tab and search for “Docker.”&lt;/li&gt;
&lt;li&gt;  Check the following Docker-related plugins:&lt;/li&gt;
&lt;li&gt;  Docker&lt;/li&gt;
&lt;li&gt;  Docker Commons&lt;/li&gt;
&lt;li&gt;  Docker Pipeline&lt;/li&gt;
&lt;li&gt;  Docker API&lt;/li&gt;
&lt;li&gt;  docker-build-step&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click on the “Install without restart” button to install these plugins.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Add DockerHub Credentials:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  To securely handle DockerHub credentials in your Jenkins pipeline, follow these steps:&lt;/li&gt;
&lt;li&gt;  Go to “Dashboard” → “Manage Jenkins” → “Manage Credentials.”&lt;/li&gt;
&lt;li&gt;  Click on “System” and then “Global credentials (unrestricted).”&lt;/li&gt;
&lt;li&gt;  Click on “Add Credentials” on the left side.&lt;/li&gt;
&lt;li&gt;  Choose “Username with password” as the kind of credentials.&lt;/li&gt;
&lt;li&gt;  Enter your DockerHub username and password and give the credentials an ID (e.g., “docker”).&lt;/li&gt;
&lt;li&gt;  Click “OK” to save your DockerHub credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, you have installed the Dependency-Check plugin, configured the tool, and added Docker-related plugins along with your DockerHub credentials in Jenkins. You can now proceed with configuring your Jenkins pipeline to include these tools and credentials in your CI/CD process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
pipeline{
    agent any
    tools{
        jdk 'jdk17'
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME=tool 'sonar-scanner'
    }
    stages {
        stage('clean workspace'){
            steps{
                cleanWs()
            }
        }
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/NikhilRaj-2003/devsecops-netflix-clone.git'
            }
        }
        stage("Sonarqube Analysis "){
            steps{
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \
                    -Dsonar.projectKey=Netflix '''
                }
            }
        }
        stage("quality gate"){
           steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token' 
                }
            } 
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . &amp;gt; trivyfs.txt"
            }
        }
        stage("Docker Build &amp;amp; Push"){
            steps{
                script{
                   withDockerRegistry(credentialsId: 'docker', toolName: 'docker'){   
                       sh "docker build --build-arg TMDB_V3_API_KEY=&amp;lt;your-api-key&amp;gt;  -t netflix ."
                       sh "docker tag netflix nikhilaj2003/netflix:latest "
                       sh "docker push nikhilaj2003/netflix:latest "
                    }
                }
            }
        }
        stage("TRIVY"){
            steps{
                sh "trivy image nikhilaj2003/netflix:latest &amp;gt; trivyimage.txt" 
            }
        }
        stage('Deploy to container'){
            steps{
                sh 'docker run -d -p 8081:80 nikhilaj2003/netflix:latest'
            }
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;If you get a Docker login failed error, run these commands on your EC2 instance, then build again:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Phase 4: Monitoring
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Install Prometheus and Grafana:&lt;/li&gt;
&lt;li&gt; Set up Prometheus and Grafana to monitor your application.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Installing Prometheus:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; First, create a dedicated Linux user for Prometheus and download Prometheus:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo useradd - system - no-create-home - shell /bin/false prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Extract Prometheus files, move them, and create directories:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
cd prometheus-2.47.1.linux-amd64/
sudo mkdir -p /data /etc/prometheus
sudo mv prometheus promtool /usr/local/bin/
sudo mv consoles/ console_libraries/ /etc/prometheus/
sudo mv prometheus.yml /etc/prometheus/prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Set ownership for directories:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chown -R prometheus:prometheus /etc/prometheus/ /data/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a systemd unit configuration file for Prometheus:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/systemd/system/prometheus.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Add the following content to the &lt;code&gt;prometheus.service&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle
[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Here’s a brief explanation of the key parts in this &lt;code&gt;prometheus.service&lt;/code&gt; file:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;User&lt;/code&gt; and &lt;code&gt;Group&lt;/code&gt; specify the Linux user and group under which Prometheus will run.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;ExecStart&lt;/code&gt; is where you specify the Prometheus binary path, the location of the configuration file (&lt;code&gt;prometheus.yml&lt;/code&gt;), the storage directory, and other settings.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;web.listen-address&lt;/code&gt; configures Prometheus to listen on all network interfaces on port 9090.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;web.enable-lifecycle&lt;/code&gt; allows for management of Prometheus through API calls.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Enable and start Prometheus:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable prometheus sudo systemctl start prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Verify Prometheus’s status:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; You can access Prometheus in a web browser using your server’s IP and port 9090:&lt;/li&gt;
&lt;li&gt; &lt;code&gt;http://&amp;lt;your-server-ip&amp;gt;:9090&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Installing Node Exporter:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Create a system user for Node Exporter and download Node Exporter:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo useradd --system --no-create-home --shell /bin/false node_exporter
wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Extract Node Exporter files, move the binary, and clean up:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/
rm -rf node_exporter*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a systemd unit configuration file for Node Exporter:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/systemd/system/node_exporter.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Add the following content to the &lt;code&gt;node_exporter.service&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter --collector.logind
[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Replace &lt;code&gt;--collector.logind&lt;/code&gt; with any additional flags as needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable and start Node Exporter:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable node_exporter sudo systemctl start node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Verify the Node Exporter’s status:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can access Node Exporter metrics in Prometheus.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Configure Prometheus Plugin Integration:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Integrate Jenkins with Prometheus to monitor the CI/CD pipeline: install the “Prometheus metrics” plugin in Jenkins, which exposes build metrics at &lt;code&gt;/prometheus&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; Prometheus Configuration:&lt;/li&gt;
&lt;li&gt; To configure Prometheus to scrape metrics from Node Exporter and Jenkins, you need to modify the &lt;code&gt;prometheus.yml&lt;/code&gt; file. Here is an example &lt;code&gt;prometheus.yml&lt;/code&gt; configuration for your setup:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['&amp;lt;your-jenkins-ip&amp;gt;:&amp;lt;your-jenkins-port&amp;gt;']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Make sure to replace &lt;code&gt;&amp;lt;your-jenkins-ip&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;your-jenkins-port&amp;gt;&lt;/code&gt; with the appropriate values for your Jenkins setup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the validity of the configuration file:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;promtool check config /etc/prometheus/prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Reload the Prometheus configuration without restarting:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST http://localhost:9090/-/reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;You can access Prometheus targets at:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;http://&amp;lt;your-prometheus-ip&amp;gt;:9090/targets&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Grafana
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Install Grafana on Ubuntu 22.04 and set it up to work with Prometheus.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Step 1: Install Dependencies:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First, ensure that all necessary dependencies are installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install -y apt-transport-https software-properties-common
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Step 2: Add the GPG Key:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Add the GPG key for Grafana:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Step 3: Add Grafana Repository:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Add the repository for Grafana stable releases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Step 4: Update and Install Grafana:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Update the package list and install Grafana:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get -y install grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Step 5: Enable and Start Grafana Service:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To automatically start Grafana after a reboot, enable the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable grafana-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, start Grafana:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start grafana-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Step 6: Check Grafana Status:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Verify the status of the Grafana service to ensure it’s running correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status grafana-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Step 7: Access Grafana Web Interface:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Open a web browser and navigate to Grafana using your server’s IP address. The default port for Grafana is 3000. For example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://&amp;lt;your-server-ip&amp;gt;:3000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You’ll be prompted to log in to Grafana. The default username is “admin,” and the default password is also “admin.”&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Step 8: Change the Default Password:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you log in for the first time, Grafana will prompt you to change the default password for security reasons. Follow the prompts to set a new password.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Step 9: Add Prometheus Data Source:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To visualize metrics, you need to add a data source. Follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Click on the gear icon (⚙️) in the left sidebar to open the “Configuration” menu.&lt;/li&gt;
&lt;li&gt;  Select “Data Sources.”&lt;/li&gt;
&lt;li&gt;  Click on the “Add data source” button.&lt;/li&gt;
&lt;li&gt;  Choose “Prometheus” as the data source type.&lt;/li&gt;
&lt;li&gt;  In the “HTTP” section:&lt;/li&gt;
&lt;li&gt;  Set the “URL” to &lt;code&gt;http://localhost:9090&lt;/code&gt; (assuming Prometheus is running on the same server).&lt;/li&gt;
&lt;li&gt;  Click the “Save &amp;amp; Test” button to ensure the data source is working.&lt;/li&gt;
&lt;/ul&gt;
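
&lt;p&gt;Equivalently, the data source can be provisioned from a file instead of the UI: Grafana reads YAML files from &lt;code&gt;/etc/grafana/provisioning/datasources/&lt;/code&gt; at startup. A minimal sketch, assuming Prometheus runs on the same host:&lt;/p&gt;

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```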

&lt;blockquote&gt;
&lt;p&gt;Step 10: Import a Dashboard:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To make it easier to view metrics, you can import a pre-configured dashboard. Follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Click on the “+” (plus) icon in the left sidebar to open the “Create” menu.&lt;/li&gt;
&lt;li&gt;  Select “Dashboard.”&lt;/li&gt;
&lt;li&gt;  Click on the “Import” dashboard option.&lt;/li&gt;
&lt;li&gt;  Enter the dashboard code you want to import (e.g., code 1860).&lt;/li&gt;
&lt;li&gt;  Click the “Load” button.&lt;/li&gt;
&lt;li&gt;  Select the data source you added (Prometheus) from the dropdown.&lt;/li&gt;
&lt;li&gt;  Click on the “Import” button.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should now have a Grafana dashboard set up to visualize metrics from Prometheus.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l8p35dgf0vuwa1jeqgj.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l8p35dgf0vuwa1jeqgj.jpeg" alt="Grafana Dashboard(nodeport)" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Grafana is a powerful tool for creating visualizations and dashboards, and you can further customize it to suit your specific monitoring needs.&lt;/p&gt;

&lt;p&gt;That’s it! You’ve successfully installed and set up Grafana to work with Prometheus for monitoring and visualization.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Configure Prometheus Plugin Integration:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  Integrate Jenkins with Prometheus to monitor the CI/CD pipeline.&lt;/li&gt;
&lt;/ul&gt;
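&lt;p&gt;As a sketch of what that integration usually involves (assuming the Jenkins Prometheus metrics plugin is installed, which exposes metrics at &lt;code&gt;/prometheus&lt;/code&gt; by default, and a placeholder Jenkins IP), you would add a scrape job like this to prometheus.yml:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['&amp;lt;jenkins-ip&amp;gt;:8080']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;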

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytzq3x08vkh4jeobnc6x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytzq3x08vkh4jeobnc6x.jpeg" alt="Grafana Dashboard(Jenkins)" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 5: Notification
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Implement Notification Services:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  Set up email notifications in Jenkins or other notification mechanisms.&lt;/li&gt;
&lt;/ul&gt;
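&lt;p&gt;As an illustrative sketch (assuming the Email Extension plugin is installed; the recipient address is a placeholder), a &lt;code&gt;post&lt;/code&gt; block like this in your Jenkinsfile sends a mail after every build:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;post {
    always {
        emailext (
            subject: "${currentBuild.result}: ${env.JOB_NAME} [${env.BUILD_NUMBER}]",
            body: "Check the console output at ${env.BUILD_URL}",
            to: 'you@example.com'
        )
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;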

&lt;h1&gt;
  
  
  Phase 6: Kubernetes
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Create a Kubernetes Cluster with Node Groups&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this phase, you’ll set up a Kubernetes cluster with node groups. This will provide a scalable environment to deploy and manage your applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor Kubernetes with Prometheus&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Prometheus is a powerful monitoring and alerting toolkit, and you’ll use it to monitor your Kubernetes cluster. Additionally, you’ll install the node exporter using Helm to collect metrics from your cluster nodes.&lt;/p&gt;

&lt;h1&gt;
  
  
  Install Node Exporter using Helm
&lt;/h1&gt;

&lt;p&gt;To begin monitoring your Kubernetes cluster, you’ll install the Prometheus Node Exporter. This component allows you to collect system-level metrics from your cluster nodes. Here are the steps to install the Node Exporter using Helm:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Add the Prometheus Community Helm repository:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Create a Kubernetes namespace for the Node Exporter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace prometheus-node-exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Install the Node Exporter using Helm:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install prometheus-node-exporter prometheus-community/prometheus-node-exporter - namespace 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
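&lt;p&gt;As a quick sanity check (assuming the namespace name used above), you can verify that the exporter was deployed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n prometheus-node-exporter
helm list -n prometheus-node-exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;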



&lt;p&gt;Add a Job to Scrape Metrics from nodeIp:9100/metrics in prometheus.yml:&lt;/p&gt;

&lt;p&gt;Update your Prometheus configuration (prometheus.yml) to add a new job for scraping metrics from nodeIp:9100/metrics. You can do this by adding the following configuration to your prometheus.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- job_name: 'Netflix'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['node1Ip:9100']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace ‘Netflix’ with a descriptive name for your job. The static_configs section specifies the targets to scrape metrics from; here it is set to node1Ip:9100, the default Node Exporter port.&lt;/p&gt;

&lt;p&gt;Don’t forget to reload or restart Prometheus to apply these changes to your configuration.&lt;/p&gt;
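&lt;p&gt;Depending on how Prometheus was installed, the reload can look like this (the curl variant assumes Prometheus was started with the --web.enable-lifecycle flag):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart prometheus

# Or, without a full restart:
curl -X POST http://localhost:9090/-/reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;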

&lt;blockquote&gt;
&lt;p&gt;To deploy an application with ArgoCD, follow these steps:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Deploy Application with ArgoCD
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Install ArgoCD:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; You can install ArgoCD on your Kubernetes cluster by following the instructions provided in the &lt;a href="https://archive.eksworkshop.com/intermediate/290_argocd/install/" rel="noopener noreferrer"&gt;EKS Workshop&lt;/a&gt; documentation.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Set Your GitHub Repository as a Source:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; After installing ArgoCD, you need to set up your GitHub repository as a source for your application deployment. This typically involves configuring the connection to your repository and defining the source for your ArgoCD application. The specific steps will depend on your setup and requirements.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Create an ArgoCD Application:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;name&lt;/code&gt;: Set the name for your application.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;destination&lt;/code&gt;: Define the destination where your application should be deployed.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;project&lt;/code&gt;: Specify the project the application belongs to.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;source&lt;/code&gt;: Set the source of your application, including the GitHub repository URL, revision, and the path to the application within the repository.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;syncPolicy&lt;/code&gt;: Configure the sync policy, including automatic syncing, pruning, and self-healing.&lt;/li&gt;
&lt;/ul&gt;
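&lt;p&gt;Putting those fields together, a minimal Application manifest might look like the following (the repository URL, path, and namespaces are placeholders for your own setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: netflix-clone
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/&amp;lt;your-username&amp;gt;/&amp;lt;your-repo&amp;gt;.git
    targetRevision: HEAD
    path: Kubernetes
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;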

&lt;blockquote&gt;
&lt;p&gt;Access your Application :&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  To access the app, make sure port 30007 is open in your security group, then open a new browser tab and go to NodeIP:30007; your app should be running.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Phase 7: Cleanup&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Cleanup AWS EC2 Instances:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  Terminate AWS EC2 instances that are no longer needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrates a complete DevSecOps pipeline by securely building, testing, and deploying a Netflix Clone on AWS using Jenkins, Docker, SonarQube, and Prometheus-Grafana monitoring. It showcases practical cloud automation and security best practices while strengthening your skills in CI/CD and cloud-native deployments.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devsecops</category>
      <category>devops</category>
      <category>netflixclone</category>
    </item>
    <item>
      <title>How to Host a Static Website on AWS S3 Using Terraform: Step-by-Step Guide for DevOps Beginners.</title>
      <dc:creator>Nikhil Raj A </dc:creator>
      <pubDate>Tue, 08 Jul 2025 14:43:08 +0000</pubDate>
      <link>https://forem.com/nikhilraj-2003/how-to-host-a-static-website-on-aws-s3-using-terraform-step-by-step-guide-for-devops-beginners-4j67</link>
      <guid>https://forem.com/nikhilraj-2003/how-to-host-a-static-website-on-aws-s3-using-terraform-step-by-step-guide-for-devops-beginners-4j67</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As a DevOps beginner, deploying your first static website using &lt;strong&gt;Terraform on AWS S3&lt;/strong&gt; is one of the best hands-on projects to understand &lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;, AWS services, and real-world DevOps workflows.&lt;/p&gt;

&lt;p&gt;In this guide, you will learn &lt;strong&gt;how to automate S3 bucket creation, configure it for static website hosting, and upload your website files using Terraform&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pgs0o1nbj6pscmlu7sh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pgs0o1nbj6pscmlu7sh.gif" alt="workflow of the Project" width="2024" height="1203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Terraform?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; is an &lt;strong&gt;open-source Infrastructure as Code (IaC) tool&lt;/strong&gt; created by HashiCorp that allows you to &lt;strong&gt;define, provision, and manage cloud infrastructure using code&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Terraform setup : &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  An AWS account.&lt;/li&gt;
&lt;li&gt;  Terraform installed on your system (either locally with VS Code or on an EC2 instance).&lt;/li&gt;
&lt;li&gt;  AWS CLI configured (&lt;code&gt;aws configure&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  Basic HTML files (&lt;code&gt;index.html&lt;/code&gt;, &lt;code&gt;style.css&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Why this project?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;  Get hands-on with &lt;strong&gt;Terraform basics.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  Understand &lt;strong&gt;S3 static website hosting&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  Automate infrastructure, avoiding manual AWS console clicks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Create Your Terraform Project
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Create the main configuration file (conventionally &lt;code&gt;main.tf&lt;/code&gt;), where all the resources for the project are defined.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "bucket" {
  bucket = var.bucket_name
}
resource "aws_s3_bucket_ownership_controls" "bucket" {
  bucket = aws_s3_bucket.bucket.id
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}
resource "aws_s3_bucket_public_access_block" "bucket" {
  bucket = aws_s3_bucket.bucket.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}
resource "aws_s3_bucket_acl" "bucket" {
  depends_on = [    aws_s3_bucket_ownership_controls.bucket,
    aws_s3_bucket_public_access_block.bucket
  ]
  bucket = aws_s3_bucket.bucket.id
  acl    = "public-read"
}
resource "aws_s3_bucket_policy" "public_read" {
  bucket = aws_s3_bucket.bucket.id
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [      {
        Effect = "Allow",
        Principal = "*",
        Action = "s3:GetObject",
        Resource = "${aws_s3_bucket.bucket.arn}/*"
      }
    ]
  })
}
resource "aws_s3_object" "index" {
  bucket       = aws_s3_bucket.bucket.id
  key          = "index.html"
  source       = "index.html"
  content_type = "text/html"
}
resource "aws_s3_object" "style" {
  bucket       = aws_s3_bucket.bucket.id
  key          = "style.css"
  source       = "style.css"
  content_type = "text/css"
}
resource "aws_s3_object" "error" {
  bucket       = aws_s3_bucket.bucket.id
  key          = "error.html"
  source       = "error.html"
  content_type = "text/html"
}
resource "aws_s3_bucket_website_configuration" "website" {
  bucket = aws_s3_bucket.bucket.id
  index_document {
    suffix = "index.html"
  }
  error_document {
    key = "error.html"
  }
  depends_on = [aws_s3_bucket_acl.bucket]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create the variables file (conventionally &lt;code&gt;variables.tf&lt;/code&gt;), where all the variables are defined.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "bucket_name" {
  default = "myterraformbucket0019"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create the provider file (conventionally &lt;code&gt;provider.tf&lt;/code&gt;), which specifies which &lt;strong&gt;cloud provider&lt;/strong&gt; is used for this project.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.10.0"
    }
  }
}
provider "aws" {
  region = "ap-south-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create the outputs file (conventionally &lt;code&gt;outputs.tf&lt;/code&gt;), which exposes the website endpoint after deployment.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "websiteendpoint" {
    value = "aws_s3_bucket.bucket.website_endpoint"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Add Website Files
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Create &lt;code&gt;index.html&lt;/code&gt; and &lt;code&gt;style.css&lt;/code&gt; for the website.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;index.html&lt;/strong&gt; : &lt;a href="https://github.com/NikhilRaj-2003/terraform-s3-static-website-hosting/blob/main/design/index.html" rel="noopener noreferrer"&gt;https://github.com/NikhilRaj-2003/terraform-s3-static-website-hosting/blob/main/design/index.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;style.css&lt;/strong&gt; : &lt;a href="https://github.com/NikhilRaj-2003/terraform-s3-static-website-hosting/blob/main/design/style.css" rel="noopener noreferrer"&gt;https://github.com/NikhilRaj-2003/terraform-s3-static-website-hosting/blob/main/design/style.css&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 3: Initialize and Deploy
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;code&gt;terraform init&lt;/code&gt; : &lt;strong&gt;initializes your Terraform working directory&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  It &lt;strong&gt;downloads the required provider plugins&lt;/strong&gt; (like AWS, Azure) needed to work with your cloud resources.&lt;/li&gt;
&lt;li&gt;  It &lt;strong&gt;sets up the backend configuration&lt;/strong&gt; (where your state file will be stored, like local or S3).&lt;/li&gt;
&lt;li&gt;  It prepares the &lt;code&gt;.terraform&lt;/code&gt; folder inside your project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0bahnhgmzkgwdbh0vlr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0bahnhgmzkgwdbh0vlr.png" alt="terraform init" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;terraform plan&lt;/code&gt;: Used to &lt;strong&gt;preview changes&lt;/strong&gt; Terraform will make before applying. Reads your &lt;strong&gt;Terraform configuration files&lt;/strong&gt; (&lt;code&gt;.tf&lt;/code&gt; files).&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  Checks the &lt;strong&gt;current state&lt;/strong&gt; of your infrastructure.&lt;/li&gt;
&lt;li&gt;  Generates a &lt;strong&gt;detailed execution plan&lt;/strong&gt; showing actions needed to match your desired configuration.&lt;/li&gt;
&lt;li&gt;  Indicates &lt;strong&gt;which resources will be created, updated, or destroyed&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  Helps you &lt;strong&gt;review and verify expected changes&lt;/strong&gt; safely.&lt;/li&gt;
&lt;li&gt;  Reduces the chances of &lt;strong&gt;accidental modifications&lt;/strong&gt; to your infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78avusrixo2edount6c0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78avusrixo2edount6c0.png" alt="terraform plan" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;terraform apply&lt;/code&gt;: Used to &lt;strong&gt;execute actions&lt;/strong&gt; required to reach the desired state of your infrastructure.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  Reads your &lt;strong&gt;Terraform configuration files&lt;/strong&gt; (&lt;code&gt;.tf&lt;/code&gt; files).&lt;/li&gt;
&lt;li&gt;  Runs &lt;strong&gt;after reviewing the plan&lt;/strong&gt; shown by &lt;code&gt;terraform plan&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  Automatically &lt;strong&gt;creates, updates, or deletes resources&lt;/strong&gt; on your cloud provider.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Prompts for confirmation&lt;/strong&gt; before proceeding (unless using &lt;code&gt;-auto-approve&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
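&lt;p&gt;The three commands above are run in sequence from the project directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply    # add -auto-approve to skip the confirmation prompt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;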

&lt;ol&gt;
&lt;li&gt;After running &lt;code&gt;terraform apply&lt;/code&gt;, the S3 bucket is created in your AWS account without ever opening the AWS console.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgptzf6lso8wb90r0f3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgptzf6lso8wb90r0f3d.png" alt="S3 — Bucket ." width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 4: Access Your Website
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;  Go to your AWS S3 bucket.&lt;/li&gt;
&lt;li&gt;  Go to &lt;strong&gt;Properties → Static Website Hosting&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  Copy the &lt;strong&gt;bucket website endpoint URL&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  Open in your browser and see your Terraform-hosted static website live.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxqkgkmqfv6zvzrw1wi6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxqkgkmqfv6zvzrw1wi6.png" alt="Website — Hosted using Terraform ." width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What You Learned
&lt;/h1&gt;

&lt;p&gt;1. Hosting a static website on AWS S3 using Terraform.&lt;br&gt;
2. Automating bucket creation, ACLs, and public-read policies using IaC.&lt;br&gt;
3. A practical DevOps workflow for portfolio building.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Through this project, you have learned how to &lt;strong&gt;host a static website on AWS S3 using Terraform&lt;/strong&gt;, enabling you to understand and apply the principles of &lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;. You automated the creation of an S3 bucket, configured it for static website hosting, and managed public access using Terraform, eliminating manual steps in the AWS console.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>cloudcomputing</category>
      <category>s3bucket</category>
      <category>devops</category>
    </item>
    <item>
      <title>Secure and Simple: Enabling Passwordless SSH Login on Linux Servers</title>
      <dc:creator>Nikhil Raj A </dc:creator>
      <pubDate>Mon, 23 Jun 2025 17:43:58 +0000</pubDate>
      <link>https://forem.com/nikhilraj-2003/secure-and-simple-enabling-passwordless-ssh-login-on-linux-servers-17i</link>
      <guid>https://forem.com/nikhilraj-2003/secure-and-simple-enabling-passwordless-ssh-login-on-linux-servers-17i</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A620%2Fformat%3Awebp%2F0%2Agz66R1Km6W3eRbsf" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A620%2Fformat%3Awebp%2F0%2Agz66R1Km6W3eRbsf" alt="SSH Connection" width="310" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;When managing multiple Linux servers, frequent SSH logins can become repetitive and time-consuming — especially when automating tasks, running scripts remotely, or setting up services like Ansible. A passwordless SSH connection offers a secure and convenient way to streamline server communication without compromising on security.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll walk through how to set up passwordless SSH authentication between two Linux servers using &lt;strong&gt;SSH key-based authentication&lt;/strong&gt;. This method eliminates the need to manually enter a password every time you connect from one server to another. Instead, it uses a public-private key pair to authenticate access securely.&lt;/p&gt;

&lt;p&gt;Whether you’re a system administrator, DevOps engineer, or just someone managing a few Linux boxes, this setup will make your workflow more efficient and secure.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is SSH?
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;SSH (Secure Shell)&lt;/strong&gt; is a &lt;strong&gt;cryptographic network protocol&lt;/strong&gt; used to securely connect to remote systems over an unsecured network. It enables users to access and control remote machines, typically via a command-line interface, while encrypting all communications to protect against eavesdropping, tampering, and impersonation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61wgfwrpo3z1fve6ze0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61wgfwrpo3z1fve6ze0p.png" alt="SSH key Exchange" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Key Features:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;  Encrypted communication&lt;/li&gt;
&lt;li&gt;  Remote command execution&lt;/li&gt;
&lt;li&gt;  Tunnel creation for port forwarding&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;  Two Linux servers (Server A and Server B).&lt;/li&gt;
&lt;li&gt;  Administrative access to both servers.&lt;/li&gt;
&lt;li&gt;  AWS Account.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1 : Create 2 Linux Servers from EC2&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;1. Provide a name for the instance.&lt;/p&gt;

&lt;p&gt;2. Set the number of instances to &lt;strong&gt;2&lt;/strong&gt;, because two servers are needed to establish the passwordless connection between them.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select Amazon Linux 2023 AMI or Ubuntu AMI based on your convenience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the required key pair.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Launch Instance&lt;/strong&gt; , after setting up the instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9mcm1smwrgo5wbhtve8.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9mcm1smwrgo5wbhtve8.jpeg" alt="Instance creation" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2 : Generate SSH Key Pair on Server-A&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Connect to Server A using SSH or physical access.&lt;/li&gt;
&lt;li&gt; Set the hostname of Server A to server1 using the command
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    sudo hostnamectl set-hostname server1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Generate an SSH key pair by running the following command on server1, then press &lt;strong&gt;Enter&lt;/strong&gt; three times to accept the defaults and confirm the key pair generation.&lt;/li&gt;
&lt;/ol&gt;
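&lt;p&gt;The command shown in the screenshot below is the standard OpenSSH key generator; pressing Enter at each prompt accepts the default file location and an empty passphrase:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;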

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8b3tb3rnwbkqat62ekyr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8b3tb3rnwbkqat62ekyr.jpeg" alt="key generation (public and private key)" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;After generating the key pair on server1, there will be two files: the &lt;strong&gt;public key (id_rsa.pub)&lt;/strong&gt; and the &lt;strong&gt;private key (id_rsa)&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To view the contents of both keys, run the following command:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
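&lt;p&gt;Assuming the default key names used above, the contents can be printed with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.ssh/id_rsa.pub   # public key - safe to share
cat ~/.ssh/id_rsa       # private key - never share this
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;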

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vvx4uspo9r1lichb4e6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vvx4uspo9r1lichb4e6.png" alt="viewing the content inside of the key-pairs" width="267" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Copy the Public key into the Server B&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Change the hostname of Server B to server2 using the command shown above.&lt;/li&gt;
&lt;li&gt; Copy the public key content and paste it into server2. The public key must be appended to the &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; file, which allows passwordless SSH from the machine that holds the corresponding &lt;strong&gt;private key&lt;/strong&gt;. The command is:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "public key of server1" &amp;gt;&amp;gt; authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;A complete example of the above command is shown below:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzoghth3sp5jf895knb2g.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzoghth3sp5jf895knb2g.jpeg" alt="Pasting the public key into server2" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After pasting the &lt;strong&gt;public key&lt;/strong&gt; of server1 into server2, return to server1 to access server2 from there.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4 : Test the Passwordless Connection&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Attempt to access Server2 from Server1 :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; To access server2 from server1, run the following command on server1 (&lt;strong&gt;Server A&lt;/strong&gt;):&lt;/li&gt;
&lt;/ol&gt;
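&lt;p&gt;In general form, the command takes the remote user and the public IP of Server2 (shown here as a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh ec2-user@&amp;lt;Server2-Public-IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;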

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbavz0vrt1dhaxoagjtrm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbavz0vrt1dhaxoagjtrm.png" alt="Command" width="431" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The command must be run on server1 itself, or the connection will not be established. Run &lt;code&gt;ssh ec2-user@13.203.67.192&lt;/code&gt; on server1, where &lt;em&gt;13.203.67.192&lt;/em&gt; is the public IP of server2.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febk0lqbgwf254uqe40se.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febk0lqbgwf254uqe40se.jpeg" alt="Accesing server2 from server1" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Troubleshooting Tips
&lt;/h1&gt;

&lt;p&gt;If the passwordless connection doesn’t work, here are some troubleshooting steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Check Permissions: Ensure the &lt;code&gt;.ssh&lt;/code&gt; directory on both Server A and Server B has the correct permissions. It should be owned by the user and have restricted permissions:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod 700 ~/.ssh 
chmod 600 ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Key File Names: Verify that you are using the default key names (&lt;code&gt;id_rsa&lt;/code&gt; and &lt;code&gt;id_rsa.pub&lt;/code&gt;) or the names you specified during key generation.&lt;/li&gt;
&lt;li&gt; SSH Agent: Make sure you have added the private key to the SSH agent on Server A using &lt;code&gt;ssh-add&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-add ~/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Setting up a &lt;strong&gt;passwordless SSH connection between two Linux servers&lt;/strong&gt; is a simple yet powerful way to streamline secure access, automate tasks, and improve system administration efficiency. By generating an SSH key pair and copying the public key to the remote server, you eliminate the need to enter a password each time you connect — without compromising on security.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>security</category>
      <category>userconnection</category>
      <category>privateandpublickey</category>
    </item>
    <item>
      <title>Python vs Bash Scripting: Differences, Advantages &amp; When to Use Each</title>
      <dc:creator>Nikhil Raj A </dc:creator>
      <pubDate>Sat, 21 Jun 2025 08:16:16 +0000</pubDate>
      <link>https://forem.com/nikhilraj-2003/python-vs-bash-scripting-differences-advantages-when-to-use-each-5cc2</link>
      <guid>https://forem.com/nikhilraj-2003/python-vs-bash-scripting-differences-advantages-when-to-use-each-5cc2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“&lt;strong&gt;Should I write this script in Python or Bash?&lt;/strong&gt;”&lt;br&gt;
That one question has haunted developers and DevOps engineers alike. On the surface, both get the job done — but under the hood, they’re built for different worlds. In this blog, we’ll break down the real-world &lt;strong&gt;differences between Bash and Python scripting&lt;/strong&gt;, their &lt;strong&gt;advantages&lt;/strong&gt;, and most importantly — &lt;strong&gt;when you should use each.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What Is Scripting?
&lt;/h2&gt;

&lt;p&gt;Scripting, at its core, is about giving your computer a to-do list — a set of instructions it can follow automatically, step by step. Think of it like writing a recipe: instead of telling a person how to cook a dish, you’re telling the computer how to carry out a task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Scripting Matters?
&lt;/h2&gt;

&lt;p&gt;Let’s be honest — nobody enjoys doing the same repetitive tasks every day. Whether it’s moving files, cleaning up logs, or setting up your dev environment for the 10th time this week, it gets old fast. That’s where scripting comes in — and it’s a total game changer.&lt;/p&gt;

&lt;p&gt;Scripting is like giving your computer a checklist and saying, “You handle this. I’ve got better things to do.” Once you write a script, it takes over the boring stuff — no complaints, no forgetfulness, just results. Scripting helps you reduce errors, save time, and focus on the stuff that actually matters. Trust me, the moment you start automating even the smallest tasks, you’ll wonder how you ever lived without it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Some of the most popular scripting languages include:&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Bash&lt;/strong&gt; for system tasks on Unix/Linux.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Python&lt;/strong&gt; for more complex automation and cross-platform scripting.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JavaScript&lt;/strong&gt; for client-side browser scripting.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;PowerShell&lt;/strong&gt; for automation in Windows environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Bash Scripting — The Command-Line Ninja
&lt;/h1&gt;

&lt;p&gt;Bash (&lt;strong&gt;&lt;em&gt;Bourne Again Shell&lt;/em&gt;&lt;/strong&gt;) is the default shell on most Linux distributions and macOS. It’s designed to interact directly with the operating system. Think of Bash as a glue that connects other CLI tools together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9dhmkyfcs3u0r9j6vstm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9dhmkyfcs3u0r9j6vstm.png" alt="Bash Scripting" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes Bash So Special?
&lt;/h2&gt;

&lt;p&gt;Bash isn’t just that black box you type commands into — it’s much more than that. It’s like your backstage pass to the entire operating system. With Bash, you’re not just running commands, you’re &lt;em&gt;connecting&lt;/em&gt; them, &lt;em&gt;chaining&lt;/em&gt; them, and &lt;em&gt;automating&lt;/em&gt; them like a pro.&lt;/p&gt;

&lt;p&gt;Think of Bash as your personal assistant for the command line. You can write a small script to do boring, repetitive things — like moving files, cleaning up folders, or checking system health — and Bash will handle it all for you, without breaking a sweat.&lt;/p&gt;

&lt;p&gt;The real magic? Bash lets you glue together tons of other tools — like &lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;grep&lt;/code&gt;, &lt;code&gt;awk&lt;/code&gt;, &lt;code&gt;sed&lt;/code&gt;, &lt;code&gt;find&lt;/code&gt;, and &lt;code&gt;curl&lt;/code&gt;. On their own, these tools are powerful. But with Bash, you can make them work together like a well-rehearsed orchestra. One command filters, another searches, another renames — and Bash makes it all flow smoothly in just a few lines of script.&lt;/p&gt;
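&lt;p&gt;To make that “orchestra” idea concrete, here’s a tiny, hypothetical sketch (the log file and its contents are made up) where each tool does one job and Bash pipes them together:&lt;/p&gt;

```shell
# Create a small sample log (hypothetical data) so the pipeline is runnable.
printf 'ERROR disk full\nok\nerror timeout\nERROR disk full\n' > app.log

# grep filters, sort groups identical lines, uniq -c counts them,
# sort -rn ranks by count, and head keeps the top results.
grep -i "error" app.log | sort | uniq -c | sort -rn | head -3
```

&lt;p&gt;One command filters, the next groups, and the next counts and ranks; the pipe (&lt;code&gt;|&lt;/code&gt;) is the glue.&lt;/p&gt;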

&lt;h2&gt;
  
  
  Advantages of Bash Scripting
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Perfect for interacting with the OS, managing files, users, permissions, services, etc.&lt;/li&gt;
&lt;li&gt;Executes quickly with minimal overhead — ideal for short scripts or quick fixes.&lt;/li&gt;
&lt;li&gt;Easily connects tools like &lt;code&gt;grep&lt;/code&gt;, &lt;code&gt;awk&lt;/code&gt;, &lt;code&gt;sed&lt;/code&gt;, &lt;code&gt;find&lt;/code&gt;, etc. in one-liners or scripts.&lt;/li&gt;
&lt;li&gt;No need for setup — just open the terminal and start scripting.&lt;/li&gt;
&lt;li&gt;Bash is often the go-to choice for scheduled tasks and sysadmin routines.&lt;/li&gt;
&lt;/ol&gt;
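&lt;p&gt;Putting those advantages together, here is a minimal, hypothetical housekeeping script (the folder and file names are illustrative) of the kind you might schedule with cron:&lt;/p&gt;

```shell
#!/bin/bash
# A minimal sketch of a routine cleanup task. Set up sample data first so
# the script is runnable anywhere; in real use the logs would already exist.
mkdir -p logs archive
touch logs/old.log logs/older.log

# Move every .log file into archive/ and report what was done.
for f in logs/*.log; do
  mv "$f" archive/
  echo "archived: $f"
done
```

&lt;p&gt;No setup, no dependencies — just the shell and a few lines of script.&lt;/p&gt;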

&lt;h2&gt;
  
  
  A Few Common Bash Commands
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;ls&lt;/code&gt; — lists files and directories. With &lt;code&gt;-al&lt;/code&gt; it shows all entries (including hidden ones) in long format, with permissions, owner, size, and date.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls -al
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;cd&lt;/code&gt; — changes the current working directory, moving you from one directory to another.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /var/log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use &lt;code&gt;cd ..&lt;/code&gt; to go up one level, or &lt;code&gt;cd&lt;/code&gt; alone to return to your home directory.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;touch&lt;/code&gt; — creates a new empty file (or updates the timestamp of an existing one). The example below creates a text file called &lt;em&gt;report.txt.&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch report.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;mkdir&lt;/code&gt; — creates a new directory. The example below creates a directory called &lt;strong&gt;projects.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir projects
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;rm&lt;/code&gt; — removes files or directories; with &lt;code&gt;-rf&lt;/code&gt; it deletes recursively and forcefully. Be careful: there’s &lt;strong&gt;no undo.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rm -rf old_file.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;cp&lt;/code&gt; — copies files or folders to your desired location. Use &lt;code&gt;-r&lt;/code&gt; to copy directories: &lt;code&gt;cp -r source/ dest/&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp file.txt backup.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;mv&lt;/code&gt; — moves &lt;em&gt;data.csv&lt;/em&gt; into the &lt;em&gt;archive&lt;/em&gt; folder. You can also use it to rename: &lt;code&gt;mv oldname.txt newname.txt&lt;/code&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mv data.csv archive/data.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;echo&lt;/code&gt; — prints text to the terminal. In the example below, &lt;em&gt;“Hello, World!”&lt;/em&gt; is printed to the screen.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Hello, World!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;cat&lt;/code&gt; — prints a file’s contents to the terminal, so you can view a file without opening it in an editor.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat notes.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;grep&lt;/code&gt; — searches for matching text in files or in the output of other commands. The example below prints every line of &lt;em&gt;logfile.txt&lt;/em&gt; that contains “error”.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grep "error" logfile.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Python Scripting — The Swiss Army Knife of Automation
&lt;/h2&gt;

&lt;p&gt;Python wasn’t built solely for scripting, but it’s one of the best tools out there when it comes to getting things done efficiently. It’s like that reliable friend who somehow knows how to fix your Wi-Fi, automate your spreadsheet, and build a website — all before lunch. The beauty of Python lies in its readability and simplicity. You don’t need to write 20 lines of code to do something basic. Want to rename 500 files? Scrape data from a website? Monitor a folder for changes? Python makes all of that feel incredibly straightforward.&lt;/p&gt;

&lt;p&gt;And thanks to its massive library ecosystem — from &lt;code&gt;os&lt;/code&gt; and &lt;code&gt;shutil&lt;/code&gt; for file handling, to &lt;code&gt;requests&lt;/code&gt; for working with APIs, to &lt;code&gt;pandas&lt;/code&gt; for data wrangling — you rarely start from scratch. It’s versatile enough to automate daily tasks, yet powerful enough to build entire applications. Whether you’re a beginner writing your first script or a pro building robust automation pipelines, Python is the kind of language that scales with you — and always has your back.&lt;/p&gt;
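&lt;p&gt;As a small, hypothetical sketch of that standard-library power (the file and folder names are made up), here’s a backup of every &lt;code&gt;.txt&lt;/code&gt; file using only &lt;code&gt;os&lt;/code&gt; and &lt;code&gt;shutil&lt;/code&gt;:&lt;/p&gt;

```python
# A minimal sketch of the kind of file handling mentioned above, using only
# the standard library. File and folder names here are hypothetical.
import os
import shutil

os.makedirs("backup", exist_ok=True)
with open("notes.txt", "w") as f:
    f.write("remember the milk\n")

# Copy every .txt file in the current directory into backup/.
for name in os.listdir("."):
    if name.endswith(".txt"):
        shutil.copy(name, os.path.join("backup", name))
```

&lt;p&gt;No third-party packages needed: everything above ships with Python itself.&lt;/p&gt;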

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2AIb65smpKxJ5hH1-W" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2AIb65smpKxJ5hH1-W" alt="Python Scripting" width="918" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes Python So Special?
&lt;/h2&gt;

&lt;p&gt;Python is special because it’s simple, powerful, and insanely versatile. The code reads like plain English, so it’s easy to learn and easy to remember. Whether you’re automating tasks, building websites, crunching data, or diving into AI — Python can handle it all. Plus, with thousands of libraries, there’s a tool for pretty much anything you want to do. It’s the kind of language that grows with you, no matter where you start.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Python Scripting
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Clean syntax that feels like English — great for beginners and large teams.&lt;/li&gt;
&lt;li&gt; From file handling to web scraping to machine learning — there’s a library for almost everything.&lt;/li&gt;
&lt;li&gt; Perfect for logic-heavy tasks, data manipulation, API integration, and beyond.&lt;/li&gt;
&lt;li&gt; Python scripts run smoothly on Windows, macOS, and Linux.&lt;/li&gt;
&lt;li&gt; You can start with a simple script and grow it into a full-blown application.&lt;/li&gt;
&lt;/ol&gt;
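&lt;p&gt;A quick sketch of that last point: a script that starts as one small function already runs unchanged on Windows, macOS, and Linux (the sample file here is hypothetical):&lt;/p&gt;

```python
# A tiny reusable function that works the same on every OS thanks to pathlib.
from pathlib import Path

def count_lines(path: str) -> int:
    """Return the number of lines in a text file."""
    return len(Path(path).read_text().splitlines())

# Create a sample file so the sketch is runnable, then use the function.
Path("sample.txt").write_text("first\nsecond\nthird\n")
print(count_lines("sample.txt"))  # prints 3
```

&lt;p&gt;Later, the same function can be imported into a bigger tool without rewriting anything.&lt;/p&gt;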

&lt;h2&gt;
  
  
  A Few Common Python Commands
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;print()&lt;/code&gt; — displays text or variables on the screen.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print("Hello, world!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;input()&lt;/code&gt; — reads a line of input from the user as a string.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name = input("What's your name? ")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;len()&lt;/code&gt; — Returns the length of a string, list, or other data types.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;len("Python")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;type()&lt;/code&gt; — Tells you the data type (e.g., int, str, list).
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type(42)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;range()&lt;/code&gt; — generates a sequence of numbers, often used in loops.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in range(5):
    print(i)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;if&lt;/code&gt;, &lt;code&gt;elif&lt;/code&gt;, &lt;code&gt;else&lt;/code&gt; — used for decision-making in your script; which branch runs depends on the conditions.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if age &amp;lt; 18:
    print("Minor")
else:
    print("Adult")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;def&lt;/code&gt; — Used to define functions (reusable blocks of code).
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def greet():
    print("Hello!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;import&lt;/code&gt; — lets you use built-in or third-party modules in your script.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import math
print(math.sqrt(25))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;list.append()&lt;/code&gt; — adds an item to the end of a list.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fruits = ["apple", "banana"]
fruits.append("orange")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;open()&lt;/code&gt; — opens a file for reading or writing. In real scripts, prefer a &lt;code&gt;with&lt;/code&gt; block so the file is closed automatically.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;file = open("data.txt", "r")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Python vs Bash — Side-by-Side Example
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Task: List all&lt;/strong&gt; &lt;code&gt;.log&lt;/code&gt; &lt;strong&gt;files and count how many lines contain the word “error”&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Bash Script:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
for file in *.log; do
  echo "$file: $(grep -i error "$file" | wc -l) error(s)"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Python Script:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env python3
import glob
for file in glob.glob("*.log"):
    with open(file, "r") as f:
        count = sum(1 for line in f if "error" in line.lower())
    print(f"{file}: {count} error(s)")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Observation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Bash is concise, efficient, and perfect for file processing.&lt;/li&gt;
&lt;li&gt;  Python is clearer, easier to maintain, and handles edge cases more gracefully.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to Use Bash vs Python: The Right Tool for the Right Task
&lt;/h2&gt;

&lt;p&gt;Let’s be real — when you’re diving into scripting, it’s not about which language is better. It’s about &lt;strong&gt;which one makes your life easier for the task you’re tackling&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you’re working closely with the &lt;strong&gt;Linux terminal&lt;/strong&gt;, Bash is often your best friend. It’s great for those quick-and-dirty tasks like moving files around, starting or stopping services, scheduling cron jobs, or stringing together commands with pipes. Bash is fast, lightweight, and made for interacting with the shell. It really shines in &lt;strong&gt;DevOps work&lt;/strong&gt;, like managing EC2 instances, running shell scripts during deployments, or automating things through AWS CLI.&lt;/p&gt;

&lt;p&gt;Now, if your task involves &lt;strong&gt;more logic or data crunching&lt;/strong&gt;, Python is the way to go. Need to parse a massive log file? Read and write JSON or CSV? Call APIs? Handle errors gracefully and keep your script maintainable? Python does all that and more. It’s clean, powerful, and has a huge set of libraries that make complex tasks feel simple. It’s also great if your script might evolve into something bigger over time — like a command-line tool, automation framework, or even a web service.&lt;/p&gt;

&lt;p&gt;Sometimes, though, the smartest move is to use &lt;strong&gt;both&lt;/strong&gt;. For example, you might use a Bash script to keep an eye on your system, and then let Python jump in when there’s real work to do — like processing data or sending out a notification. It’s a powerful combo: Bash handles the grunt work, Python brings the brains.&lt;/p&gt;
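&lt;p&gt;A hypothetical sketch of that combo (the file name and JSON contents are invented): Bash does the shell-level check, then hands the structured data to Python:&lt;/p&gt;

```shell
# Simulate the "real work" output that some earlier job produced.
echo '{"status": "ok", "errors": 0}' > result.json

# Bash notices the file exists, then lets Python parse the JSON,
# since structured data is where Python shines.
if [ -f result.json ]; then
  python3 -c 'import json; print(json.load(open("result.json"))["status"])'
fi
```

&lt;p&gt;Bash handles the grunt work of the filesystem check; Python brings the brains for the data.&lt;/p&gt;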

&lt;p&gt;So here’s the bottom line:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use Bash&lt;/strong&gt; when you’re doing quick shell-level stuff.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Python&lt;/strong&gt; when your logic gets heavier or your task gets smarter.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Choosing between Bash and Python isn’t about picking a winner — it’s about using the right tool for the job. &lt;strong&gt;Bash&lt;/strong&gt; is unbeatable for quick, low-level system tasks and chaining CLI commands like a pro. &lt;strong&gt;Python&lt;/strong&gt; steps in when your scripts need structure, logic, or cross-platform flexibility. In reality, the best automation setups often use &lt;strong&gt;both&lt;/strong&gt;, playing to each of their strengths. So instead of asking &lt;em&gt;“Which one should I learn?”&lt;/em&gt;, ask &lt;em&gt;“When should I use which?”&lt;/em&gt; Master both, and you won’t just write scripts — you’ll build smart, elegant solutions that actually make your life easier.&lt;/p&gt;

</description>
      <category>bash</category>
      <category>python</category>
      <category>bashvspython</category>
      <category>programming</category>
    </item>
    <item>
      <title>Automatically Create Timestamped S3 Buckets with AWS Lambda and Send Gmail Alerts via SNS</title>
      <dc:creator>Nikhil Raj A </dc:creator>
      <pubDate>Thu, 19 Jun 2025 15:27:53 +0000</pubDate>
      <link>https://forem.com/nikhilraj-2003/automatically-create-timestamped-s3-buckets-with-aws-lambda-and-send-gmail-alerts-via-sns-168o</link>
      <guid>https://forem.com/nikhilraj-2003/automatically-create-timestamped-s3-buckets-with-aws-lambda-and-send-gmail-alerts-via-sns-168o</guid>
      <description>&lt;p&gt;Learn how to automatically create timestamped S3 buckets using AWS Lambda and get instant email alerts via Amazon SNS to your Gmail. Step-by-step guide for automation on AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon S3 (Simple Storage Service) is a powerful cloud storage solution provided by AWS that allows users to store and retrieve any amount of data at any time. In many real-world scenarios, organizations require automation to manage data more effectively. One such use case is the automated creation of new S3 buckets with timestamps whenever a new object is uploaded to an existing bucket. This approach not only helps in organizing data chronologically but also enables better tracking and archiving of uploads.&lt;/p&gt;

&lt;p&gt;To further enhance this automation, integrating email notifications ensures that users or administrators are immediately alerted when an upload occurs. This is accomplished by leveraging AWS services such as &lt;strong&gt;S3 Event Notifications&lt;/strong&gt;, &lt;strong&gt;Lambda Functions&lt;/strong&gt;, &lt;strong&gt;SNS (Simple Notification Service)&lt;/strong&gt;, and &lt;strong&gt;CloudWatch&lt;/strong&gt;. The Lambda function responds to upload events in the source bucket, dynamically creates a new bucket with a timestamped name, and triggers an SNS topic to send an email alert.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw5izo5zki2wca9xe2o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw5izo5zki2wca9xe2o0.png" alt="Simple Workflow of Procedure" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Simple Storage Service (S3)?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon &lt;strong&gt;Simple Storage Service&lt;/strong&gt; (Amazon S3) is a scalable, high-speed, web-based cloud storage service designed to store and retrieve any amount of data at any time. Launched by Amazon Web Services (AWS), S3 is widely used for a variety of purposes, including backup and restore, content distribution, disaster recovery, and data archiving.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Simple Notification Service (SNS)?
&lt;/h2&gt;

&lt;p&gt;Amazon &lt;strong&gt;Simple Notification Service (SNS)&lt;/strong&gt; is a fully managed messaging service provided by AWS that allows you to send notifications to a large number of recipients or systems. It provides a highly scalable, reliable, and cost-effective solution for sending messages, alerts, and notifications to end-users or other applications. SNS enables the creation and management of &lt;strong&gt;topics&lt;/strong&gt;, to which subscribers can subscribe to receive messages.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Lambda?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt; is a fully managed serverless computing service provided by Amazon Web Services (AWS) that lets you run your code in response to events without provisioning or managing servers. With Lambda, you can write code to process events such as file uploads, database changes, or HTTP requests, and AWS automatically manages the infrastructure required to run that code.&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;To get started with this project, you need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  AWS Account&lt;/li&gt;
&lt;li&gt;  Gmail Account&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Go to S3 and create an S3 bucket
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foow8zjjzzrekkqwyzzfq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foow8zjjzzrekkqwyzzfq.png" alt="captionless image" width="800" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Select &lt;strong&gt;General Purpose&lt;/strong&gt; as the bucket type.&lt;/li&gt;
&lt;li&gt; Provide a name for the S3 bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnxjtk5c1lcth8sayocw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnxjtk5c1lcth8sayocw.png" alt="creating a S3 bucket" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Untick &lt;strong&gt;Block all public access&lt;/strong&gt;, and enable &lt;strong&gt;bucket versioning&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable the bucket key and click &lt;strong&gt;Create bucket&lt;/strong&gt;. A bucket will be created under the name you provided.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22ykk38kpxojk4ulfabw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22ykk38kpxojk4ulfabw.png" alt="captionless image" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Provide the bucket policy for the S3 bucket&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Once the S3 bucket is created, go to &lt;strong&gt;Permissions&lt;/strong&gt; to provide the S3 bucket policy.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8xbk7oe8kb0zc42nssn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8xbk7oe8kb0zc42nssn.png" alt="captionless image" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Permissions&lt;/strong&gt; you will find the &lt;strong&gt;bucket policy&lt;/strong&gt;; click &lt;strong&gt;Edit&lt;/strong&gt; to provide it. The policy must be written in JSON.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-genie/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;After the policy is entered, click &lt;strong&gt;Save changes&lt;/strong&gt; to update the S3 bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Go to Lambda and create a Lambda function&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57n36ya8vpemv0ldswbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57n36ya8vpemv0ldswbo.png" alt="creating lambda function" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Click &lt;strong&gt;“Create function”&lt;/strong&gt; and select the option called &lt;strong&gt;“Author from scratch”&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt; Provide a name for your function.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffobrapthzrijn0r09vm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffobrapthzrijn0r09vm6.png" alt="captionless image" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Python 3.13&lt;/strong&gt; as the Lambda runtime and &lt;strong&gt;x86_64&lt;/strong&gt; as the &lt;strong&gt;architecture&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5778900mwttks4ehabyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5778900mwttks4ehabyw.png" alt="captionless image" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;After selecting the runtime and architecture, click &lt;strong&gt;Create function&lt;/strong&gt;; a Lambda function will be created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the function is created, you will see a screen like this with the function name at the top.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxs76vkrifhtat1jvwgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxs76vkrifhtat1jvwgi.png" alt="captionless image" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under the &lt;strong&gt;Code&lt;/strong&gt; section, provide the Python code, which is the core of the project.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
from datetime import datetime
import urllib.parse
s3 = boto3.client('s3', region_name='ap-south-1')
sns = boto3.client('sns', region_name='ap-south-1')
# Fixed bucket name with timestamp - generated only once
BUCKET_NAME = 'bucket-genie-backup-20250509'
REGION = 'ap-south-1'
SNS_TOPIC_ARN = 'arn:aws:sns:ap-south-1:530424100396:S3'
def bucket_exists(bucket_name):
    try:
        s3.head_bucket(Bucket=bucket_name)
        return True
    except s3.exceptions.ClientError:
        return False
def create_bucket(bucket_name):
    s3.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={'LocationConstraint': REGION}
    )
    print(f" Created bucket: {bucket_name}")
def copy_to_bucket(source_bucket, object_key, destination_bucket):
    copy_source = {'Bucket': source_bucket, 'Key': object_key}
    s3.copy_object(
        Bucket=destination_bucket,
        CopySource=copy_source,
        Key=object_key
    )
    print(f" Copied {object_key} from {source_bucket} to {destination_bucket}")
def send_sns_notification(bucket_name, created_time):
    message = f"""Hello user,
This is to inform you that a new S3 bucket has been created for backup purposes.
🔹 Bucket Name: {bucket_name}
🔹 Region: {REGION}
🔹 Created At (UTC): {created_time}
All future uploads will be backed up to this bucket.
Thank You,
AWS Lambda Backup System
"""
    sns.publish(
        TopicArn=SNS_TOPIC_ARN,
        Subject=' S3 Backup Bucket Created',
        Message=message
    )
    print("📧 Email notification sent via SNS.")
def lambda_handler(event, context):
    source_bucket = event['Records'][0]['s3']['bucket']['name']
    object_key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    try:
        is_new = False
        if not bucket_exists(BUCKET_NAME):
            create_bucket(BUCKET_NAME)
            is_new = True
        copy_to_bucket(source_bucket, object_key, BUCKET_NAME)
        if is_new:
            created_time = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')
            send_sns_notification(BUCKET_NAME, created_time)
        return {
            'statusCode': 200,
            'body': f"Backup complete. Object '{object_key}' copied to '{BUCKET_NAME}'."
        }
    except Exception as e:
        print(f" Error: {str(e)}")
        raise e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;After adding the code, you need to deploy and test it to make sure it runs without any errors.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdarss2yji6z63ea0o9lx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdarss2yji6z63ea0o9lx.png" alt="captionless image" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on &lt;strong&gt;Add trigger&lt;/strong&gt; so that whenever an object is uploaded to the S3 bucket, the Lambda function is invoked automatically.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y28ca3xnwevrcor46ro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y28ca3xnwevrcor46ro.png" alt="captionless image" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select S3 as the &lt;strong&gt;Source&lt;/strong&gt;, and under the Bucket section choose the bucket you created. Then click on &lt;strong&gt;Add&lt;/strong&gt; to attach the trigger to the Lambda function.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9l60wj7dlauqrs1qmr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9l60wj7dlauqrs1qmr1.png" alt="captionless image" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
&lt;strong&gt;Step 4: Go to IAM Roles and Attach the Policies&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; In the Lambda function, open &lt;strong&gt;Permissions&lt;/strong&gt; and click on the role name shown there; this redirects you to the IAM role and its policies.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7mnc5b5b9mmpftpqu4l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7mnc5b5b9mmpftpqu4l.png" alt="captionless image" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Here the role name is &lt;strong&gt;lambda-genie-role-4ttdny90&lt;/strong&gt;; click on &lt;strong&gt;Add permissions&lt;/strong&gt; to attach the policies.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxxuqt3vc8qg9kq5dab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxxuqt3vc8qg9kq5dab.png" alt="captionless image" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search for AmazonS3FullAccess, select that policy, and click on &lt;strong&gt;Add permissions&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvm2d3auvbdqcx6hmgte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvm2d3auvbdqcx6hmgte.png" alt="captionless image" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
&lt;strong&gt;Step 5: Go to SNS and Create an SNS Topic for Notifications&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv09oyf0gnzaftdmrvya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv09oyf0gnzaftdmrvya.png" alt="creating SNS topic" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Click on &lt;strong&gt;Next step&lt;/strong&gt; to create a topic, and select the topic type as &lt;strong&gt;Standard&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt; Provide a Name and a Display name for the topic; when you are notified, the notification appears under the display name.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq4oz1xjv8i3mwy143n9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq4oz1xjv8i3mwy143n9.png" alt="captionless image" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Then click on &lt;strong&gt;Create topic&lt;/strong&gt;. Once the topic is created, you need to create a &lt;strong&gt;subscription&lt;/strong&gt;, because SNS itself does not know the destination; the subscription endpoint can be &lt;strong&gt;Email, SMS, an HTTP/HTTPS endpoint, or SQS&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While creating the subscription, under Protocol select where the message should be delivered. In my case I selected &lt;strong&gt;Email&lt;/strong&gt;, so the message is delivered to my Gmail.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bc5a6i68g4woxloj79n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bc5a6i68g4woxloj79n.png" alt="captionless image" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on &lt;strong&gt;Create subscription&lt;/strong&gt; after filling in all the details. A confirmation mail is then sent to the Gmail address you registered with; you need to confirm the subscription by clicking the link in it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiqdt22d4l4owpy29exk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiqdt22d4l4owpy29exk.png" alt="subscription confirmed" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After the subscription is confirmed, you can click on &lt;strong&gt;Publish message&lt;/strong&gt; to send a message to the subscribers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpxwydntwg8rui8s12m5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpxwydntwg8rui8s12m5.png" alt="captionless image" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provide a subject if needed, type the message body you want to publish, and click on &lt;strong&gt;Publish message&lt;/strong&gt;. Remember that &lt;strong&gt;Publish message&lt;/strong&gt; sends the message manually.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcpuzwuxax96r1zbjqvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcpuzwuxax96r1zbjqvo.png" alt="captionless image" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
&lt;strong&gt;Step 6: Upload Some Files into Your S3 Bucket&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Go to your S3 bucket and upload some files or objects into it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8gstvq0dazs8fcwb50m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8gstvq0dazs8fcwb50m.png" alt="captionless image" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After selecting the files and objects, &lt;strong&gt;click on Upload&lt;/strong&gt;. Once the upload completes, you should see a new S3 bucket created with a &lt;strong&gt;timestamp&lt;/strong&gt; in its name.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf0ect94buuxb7wkppos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf0ect94buuxb7wkppos.png" alt="New S3 bucket created" width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Let me break down the newly created S3 bucket name.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft25lwpn3c3l00fylgkcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft25lwpn3c3l00fylgkcs.png" alt="Breakdown of the newly created S3 — Bucket" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now go to Gmail, where you will be notified that a new S3 bucket has been created. This is the final result.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrolkfiquzwk8nt3kma4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrolkfiquzwk8nt3kma4.png" alt="Email Notification of the Newly created bucket" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When a file is uploaded to the &lt;code&gt;bucket-genie&lt;/code&gt;, a Lambda function creates a new timestamped bucket, copies the file, and sends an email notification via SNS. It demonstrates automated event handling, object replication, and alerting in a scalable cloud setup. This approach is useful for real-time monitoring, backups, and alert-driven applications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>automation</category>
      <category>json</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Automated Deployment of Java Applications to Apache Tomcat using Jenkins.</title>
      <dc:creator>Nikhil Raj A </dc:creator>
      <pubDate>Mon, 16 Jun 2025 18:47:48 +0000</pubDate>
      <link>https://forem.com/nikhilraj-2003/automated-deployment-of-java-applications-to-apache-tomcat-using-jenkins-k5k</link>
      <guid>https://forem.com/nikhilraj-2003/automated-deployment-of-java-applications-to-apache-tomcat-using-jenkins-k5k</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This project aims to automate the deployment of Java applications to an Apache Tomcat server using Jenkins. Instead of manually building and copying &lt;code&gt;.war&lt;/code&gt; files, we use a CI/CD pipeline where Jenkins pulls code from a Git repository, builds it using Maven, and deploys it to Tomcat automatically. This streamlines the development process, reduces human error, and enables faster, more reliable application delivery, following modern DevOps practices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvp5flq5am7yge1qfain7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvp5flq5am7yge1qfain7.png" alt="workflow of the project" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Project?
&lt;/h2&gt;

&lt;p&gt;Manual deployment of Java applications is time-consuming and error-prone. This project solves that by introducing &lt;strong&gt;automation&lt;/strong&gt; using Jenkins, which ensures that every time code is updated, it is &lt;strong&gt;built, tested, and deployed automatically&lt;/strong&gt; to a Tomcat server. It saves developer time, reduces bugs, and supports faster and more reliable software delivery — all of which are essential in modern &lt;strong&gt;DevOps&lt;/strong&gt; and &lt;strong&gt;agile&lt;/strong&gt; environments.&lt;/p&gt;

&lt;h2&gt;
  
  
Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Tomcat Server&lt;/li&gt;
&lt;li&gt;  Jenkins Server&lt;/li&gt;
&lt;li&gt;  A Java project&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Tomcat Setup&lt;/strong&gt; : &lt;a href="https://github.com/NikhilRaj-2003/java-web/blob/main/src/main/webapp/Installations%20/apache%20tomcat/Tomcat%20Installation%20on%20Amazon-Linux-2.md" rel="noopener noreferrer"&gt;https://github.com/NikhilRaj-2003/java-web/blob/main/src/main/webapp/Installations%20/apache%20tomcat/Tomcat%20Installation%20on%20Amazon-Linux-2.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Jenkins&lt;/strong&gt; : &lt;a href="https://github.com/NikhilRaj-2003/java-web/blob/main/src/main/webapp/Installations%20/Jenkins/Jenkins%20Installation%20on%20Amazon-Linux-2.md" rel="noopener noreferrer"&gt;https://github.com/NikhilRaj-2003/java-web/blob/main/src/main/webapp/Installations%20/Jenkins/Jenkins%20Installation%20on%20Amazon-Linux-2.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Java Project&lt;/strong&gt; : &lt;a href="https://github.com/NikhilRaj-2003/java-web.git" rel="noopener noreferrer"&gt;https://github.com/NikhilRaj-2003/java-web.git&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step — 1 : Install Git into your Jenkins server
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Install Git and check the Git version.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Become a root 
sudo su -
# Install git.
yum install git -y
# Check Version of git.
git --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Open Jenkins in a web browser, click on Manage Jenkins, then select Global Tool Configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfgka181h6160rlsxegh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfgka181h6160rlsxegh.jpg" alt="captionless image" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Then, under Global Tool Configuration, add Git under Git installations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F686ieojsztxk52cymsw2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F686ieojsztxk52cymsw2.jpg" alt="captionless image" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In Git installations, click &lt;strong&gt;Add Git&lt;/strong&gt;, then select Git.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz11ihhx38gozfyf0qa9z.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz11ihhx38gozfyf0qa9z.jpg" alt="captionless image" width="800" height="184"&gt;&lt;/a&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwoewi56pzw0j6yxdqtcy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwoewi56pzw0j6yxdqtcy.jpg" alt="captionless image" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter the Git name and path, then click &lt;strong&gt;Apply and Save&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vo22to57x3q7hggmjwq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vo22to57x3q7hggmjwq.jpg" alt="captionless image" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now Git is successfully integrated with Jenkins. Next, under &lt;strong&gt;Global Tool Configuration&lt;/strong&gt;, go to &lt;strong&gt;Maven installations&lt;/strong&gt;. After entering the details, click on &lt;strong&gt;Apply and Save&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rjv06rxmi94zde5neh8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rjv06rxmi94zde5neh8.jpg" alt="captionless image" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Now &lt;strong&gt;Maven&lt;/strong&gt; is also successfully integrated with Jenkins.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step — 2 : Install the ‘Deploy to container’ Plugin
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Go to &lt;strong&gt;Manage Jenkins &amp;gt; Plugins &amp;gt; Available plugins&lt;/strong&gt; and install &lt;strong&gt;Deploy to container&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts618he460ijlu3lum15.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts618he460ijlu3lum15.jpg" alt="captionless image" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on &lt;strong&gt;Restart Jenkins after installation&lt;/strong&gt;; after the installation, Jenkins restarts and you need to enter your username and password.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gq0wzz5m4l88wqzjzkv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gq0wzz5m4l88wqzjzkv.jpg" alt="captionless image" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step — 3 : Create Global Credentials
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Go to &lt;strong&gt;Credentials &amp;gt; System &amp;gt; Global credentials&lt;/strong&gt;, then click on &lt;strong&gt;Add Credentials&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidkj74dgj0l7plftgn1x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidkj74dgj0l7plftgn1x.jpg" alt="captionless image" width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After clicking on Add Credentials, enter the &lt;strong&gt;username&lt;/strong&gt; and &lt;strong&gt;password&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb2772522dzpamxlgbve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb2772522dzpamxlgbve.png" alt="captionless image" width="683" height="737"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once you are done entering the details, you can see the credentials you have created.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg0k1ilduodv899rdn9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg0k1ilduodv899rdn9v.png" alt="captionless image" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step — 4: Create a Jenkins Job
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Head to &lt;strong&gt;Dashboard &amp;gt; New Item&lt;/strong&gt;, provide a name for the Jenkins job, and select &lt;strong&gt;Freestyle project&lt;/strong&gt; as the project type.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazc6rnymzbk8e9j7aalj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazc6rnymzbk8e9j7aalj.png" alt="captionless image" width="800" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter the &lt;strong&gt;GitHub repo URL&lt;/strong&gt; and &lt;strong&gt;change the branch name from master to main&lt;/strong&gt;, because this repository has no master branch.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqxbdigtpdun7oze4hty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqxbdigtpdun7oze4hty.png" alt="captionless image" width="800" height="989"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Build Steps &amp;gt; Invoke top-level Maven targets&lt;/strong&gt;, provide the Maven goals &lt;strong&gt;clean package&lt;/strong&gt;. There is no need to prefix &lt;strong&gt;mvn&lt;/strong&gt; as you would in a terminal; Jenkins prepends it automatically when running the command.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67in5cryivz314d5nn07.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67in5cryivz314d5nn07.jpg" alt="captionless image" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In Post-build Actions, click Add post-build action and select &lt;strong&gt;Deploy war/ear to a container&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pe2mzdl0n6vpi9zhtqh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pe2mzdl0n6vpi9zhtqh.jpg" alt="captionless image" width="744" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter the details as follows, then click on &lt;strong&gt;Apply and Save&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fhms2jnjacsexnnv4rn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fhms2jnjacsexnnv4rn.png" alt="captionless image" width="576" height="769"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Now Click on &lt;strong&gt;Build Now&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc8q23pght0aivzfmg43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc8q23pght0aivzfmg43.png" alt="captionless image" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The build is successful.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj4t2zrxwjx04jtom7l3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj4t2zrxwjx04jtom7l3.png" alt="captionless image" width="800" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Output of the project
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6j7whda3mc9l5be6wcj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6j7whda3mc9l5be6wcj.png" alt="Web application deployed" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use a GitHub Webhook?
&lt;/h2&gt;

&lt;p&gt;Using a &lt;strong&gt;GitHub webhook&lt;/strong&gt; with &lt;strong&gt;Jenkins&lt;/strong&gt; allows you to &lt;strong&gt;automatically trigger a build and deploy&lt;/strong&gt; your Java web application whenever someone pushes code to your GitHub repository.&lt;/p&gt;
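
&lt;p&gt;If a &lt;strong&gt;secret&lt;/strong&gt; is configured on the webhook, GitHub signs every delivery with an HMAC of the payload and sends it in the &lt;code&gt;X-Hub-Signature-256&lt;/code&gt; header, which the receiver can verify. A minimal sketch of that check (the secret and payload here are made-up examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import hashlib
import hmac

def verify_signature(payload, secret, signature_header):
    """Return True when the X-Hub-Signature-256 header matches the payload."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

secret = b"my-webhook-secret"             # placeholder secret
payload = b'{"ref": "refs/heads/main"}'   # placeholder push payload
header = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(verify_signature(payload, secret, header))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;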

&lt;h2&gt;
  
  
  Step — 5 : Automate Build and Deploy using Github webhook
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Go to your &lt;strong&gt;GitHub repository &amp;gt; Settings &amp;gt; Webhooks,&lt;/strong&gt; then create a webhook.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  The Payload URL is the &lt;strong&gt;public IP of the Jenkins server&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;public ip address&amp;gt;:8080/github-webhook/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Set the content type to &lt;strong&gt;application/json&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;  Select the trigger event as &lt;strong&gt;just the push event&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbcanoy4zh5ngiwyht42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbcanoy4zh5ngiwyht42.png" alt="captionless image" width="800" height="678"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Then go to the &lt;strong&gt;Jenkins dashboard &amp;gt; tomcat-jenkins &amp;gt; Configure &amp;gt; Triggers,&lt;/strong&gt; and enable &lt;strong&gt;GitHub hook trigger for GITScm polling.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5krbbtji8qpeua7efozl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5krbbtji8qpeua7efozl.png" alt="captionless image" width="710" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Whenever a commit or change is made in the Java web application, the webhook automatically triggers a build. Make your change, then click &lt;strong&gt;Commit&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90sx5sfpp46pwxib61ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90sx5sfpp46pwxib61ja.png" alt="captionless image" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After you click Commit, a build is created automatically in the Jenkins dashboard, which &lt;strong&gt;builds&lt;/strong&gt; and &lt;strong&gt;deploys&lt;/strong&gt; the web application as a &lt;em&gt;.war&lt;/em&gt; file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqs1a1knm9wag9m6xvsz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqs1a1knm9wag9m6xvsz.png" alt="captionless image" width="800" height="729"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Changes made in the GitHub repository are reflected in the web application automatically, without any manual intervention. The webhook fires on every commit to that repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3rdh3p309lkyj169zym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3rdh3p309lkyj169zym.png" alt="Final output of web application" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This mini project successfully demonstrates a &lt;strong&gt;CI/CD pipeline&lt;/strong&gt; where a Java web application is automatically &lt;strong&gt;built and deployed to Apache Tomcat using Jenkins&lt;/strong&gt;. By integrating &lt;strong&gt;GitHub Webhooks&lt;/strong&gt;, the pipeline is triggered instantly upon every code push, ensuring that the latest changes are continuously tested and deployed without manual intervention.&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>devops</category>
      <category>devopsmaven</category>
      <category>apachetomcat</category>
    </item>
    <item>
      <title>Automate EC2 Start/Stop with AWS Lambda and CloudWatch — Step-by-Step Guide with Alerts</title>
      <dc:creator>Nikhil Raj A </dc:creator>
      <pubDate>Sat, 14 Jun 2025 17:46:37 +0000</pubDate>
      <link>https://forem.com/nikhilraj-2003/automate-ec2-startstop-with-aws-lambda-and-cloudwatch-step-by-step-guide-with-alerts-461h</link>
      <guid>https://forem.com/nikhilraj-2003/automate-ec2-startstop-with-aws-lambda-and-cloudwatch-step-by-step-guide-with-alerts-461h</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplq8bqz5j4j2f070yw3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplq8bqz5j4j2f070yw3k.png" alt="Auto — Shutdown EC2" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@nikhilsiri2003/smart-cloud-management-auto-start-stop-ec2-instances-and-receive-notifications-3b96514f5116" rel="noopener noreferrer"&gt;Reference&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In AWS (Amazon Web Services), &lt;strong&gt;Elastic Compute Cloud (EC2)&lt;/strong&gt; instances are widely used for running applications, websites, and services. However, one common issue many users face is forgetting to shut down unused instances — leading to unnecessary billing.&lt;/p&gt;

&lt;p&gt;This project solves that problem by &lt;strong&gt;automatically shutting down idle EC2 instances&lt;/strong&gt; if they remain &lt;strong&gt;inactive for more than 5 minutes&lt;/strong&gt; and &lt;strong&gt;notifying the user instantly&lt;/strong&gt; via email or SMS.&lt;br&gt;
By integrating AWS &lt;strong&gt;CloudWatch&lt;/strong&gt;, &lt;strong&gt;Lambda&lt;/strong&gt;, and &lt;strong&gt;SNS&lt;/strong&gt;, we can build an intelligent, self-managing solution that minimizes costs and ensures no resource is wasted.&lt;/p&gt;

&lt;p&gt;Whether you’re a solo developer, startup, or enterprise team, this mini project is a &lt;strong&gt;simple but powerful way to automate your cloud operations&lt;/strong&gt; and improve cost efficiency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30vhnmgwzvcac5u3jaoy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30vhnmgwzvcac5u3jaoy.png" alt="Simple Architecture of the procedure" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Lambda?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt; is a serverless compute service that allows you to run code without provisioning or managing servers. You simply upload your code, set up a trigger (like an alarm or event), and AWS automatically runs the code for you when needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is EC2 in AWS ?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon EC2 (Elastic Compute Cloud)&lt;/strong&gt; is a web service that provides resizable computing capacity in the cloud. Think of EC2 as a virtual computer where you can run applications, host websites, or perform any tasks you would normally do on a physical server.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is SNS in AWS ?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon SNS (Simple Notification Service)&lt;/strong&gt; is a fully managed messaging service that enables you to send notifications to users or other systems. It can send messages via email, SMS, or push notifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  AWS Account&lt;/li&gt;
&lt;li&gt;  Gmail Account&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step — 1 : Create an IAM Role and Policy&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Go to the AWS Management Console.&lt;/li&gt;
&lt;li&gt; Create an IAM policy that grants EC2, SNS, and CloudWatch Logs permissions. Then click on “&lt;strong&gt;create policy”&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
 "Version": "2012-10-17",
 "Statement": [  {
   "Effect": "Allow",
   "Action": [    "logs:CreateLogGroup",
    "logs:CreateLogStream",
    "logs:PutLogEvents"
   ],
   "Resource": "arn:aws:logs:*:*:*"
  },
  {
   "Effect": "Allow",
   "Action": [    "ec2:Start*",
    "ec2:Stop*"
   ],
   "Resource": "*"
  }
 ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
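
&lt;p&gt;If you prefer the AWS CLI to the console, the same policy and role can be created like this (a sketch; the policy, role, and file names are placeholders, and &lt;code&gt;ACCOUNT_ID&lt;/code&gt; must be replaced with your own account ID):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Save the policy JSON above as policy.json, then create the policy
aws iam create-policy --policy-name ec2-startstop-policy --policy-document file://policy.json

# trust.json should allow lambda.amazonaws.com to assume the role
aws iam create-role --role-name ec2-startstop-role --assume-role-policy-document file://trust.json

# Attach the policy to the role
aws iam attach-role-policy --role-name ec2-startstop-role --policy-arn arn:aws:iam::ACCOUNT_ID:policy/ec2-startstop-policy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;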



&lt;ol&gt;
&lt;li&gt;Create an IAM role by clicking on “&lt;strong&gt;create role”&lt;/strong&gt; and attach the policy to the newly created role. Select the trusted entity type as &lt;strong&gt;“AWS Service”&lt;/strong&gt; and the use case as “&lt;strong&gt;Lambda”&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fae5igkcj45d75guvgmyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fae5igkcj45d75guvgmyl.png" alt="captionless image" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select and attach the policy that you created, then click on “&lt;strong&gt;create role”.&lt;/strong&gt; The role is created with the required permissions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma2ovh34d402x2cp9wy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma2ovh34d402x2cp9wy0.png" alt="Role created and policy attached" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step — 2 : Create 2 Lambda Functions for starting and stopping instances&lt;/strong&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Stopec2 Function
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Go to AWS Services → &lt;strong&gt;Lambda&lt;/strong&gt; and click on ‘&lt;strong&gt;Create function&lt;/strong&gt;’.&lt;/li&gt;
&lt;li&gt; Choose &lt;strong&gt;Author from scratch&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt; Under &lt;strong&gt;Basic information&lt;/strong&gt;, enter the following information:
For &lt;strong&gt;Function name&lt;/strong&gt;, enter a name that identifies it as the function that’s used to stop your EC2 instances. For example, “&lt;strong&gt;stopec2&lt;/strong&gt;”.&lt;/li&gt;
&lt;li&gt; For &lt;strong&gt;Runtime,&lt;/strong&gt; choose &lt;strong&gt;Python 3.9&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Under Permissions,&lt;/strong&gt; expand Change default execution role&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Under the Execution role&lt;/strong&gt;, choose to &lt;strong&gt;Use an existing role&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Under Existing role&lt;/strong&gt;, choose the &lt;strong&gt;IAM role&lt;/strong&gt; that you created&lt;/li&gt;
&lt;li&gt; Choose &lt;strong&gt;Create function&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt; After creating the Lambda function, go to Code → Code source, then copy and paste the Python code below.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
from datetime import datetime
# Configuration
region = 'ap-south-1'
instances = ['i-036bb14d2a7a28941']
sns_topic_arn = 'arn:aws:sns:ap-south-1:530424100396:stopec2'  # Replace with your actual SNS topic ARN
# Create clients
ec2 = boto3.client('ec2', region_name=region)
sns = boto3.client('sns', region_name=region)
def lambda_handler(event, context):
    # Stop the instance
    ec2.stop_instances(InstanceIds=instances)
    print('Stopped your instance: ' + str(instances))
    # Prepare and send SNS notification
    timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    message = f"EC2 instance {instances[0]} was stopped at {timestamp}."
    sns.publish(
        TopicArn=sns_topic_arn,
        Subject="EC2 Instance Stopped",
        Message=message
    )
    return {
        'statusCode': 200,
        'body': message
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Replace the instance ID with your own instance ID, and set the region in which your instance is running.&lt;/p&gt;
&lt;/blockquote&gt;
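
&lt;p&gt;One way to avoid hard-coding these values is to read them from the function's environment variables (a sketch; the variable names &lt;code&gt;EC2_REGION&lt;/code&gt; and &lt;code&gt;INSTANCE_IDS&lt;/code&gt; are assumptions, set under Configuration → Environment variables in the Lambda console):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

def load_config(env=None):
    """Read the region and a comma-separated list of instance IDs from env vars."""
    env = os.environ if env is None else env
    region = env.get("EC2_REGION", "ap-south-1")
    raw = env.get("INSTANCE_IDS", "")
    instances = [i.strip() for i in raw.split(",") if i.strip()]
    return region, instances

print(load_config({"EC2_REGION": "ap-south-1",
                   "INSTANCE_IDS": "i-0123abcd, i-0456efgh"}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;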

&lt;ol&gt;
&lt;li&gt;After altering the code, click on &lt;strong&gt;“Deploy”&lt;/strong&gt; to deploy it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Startec2 Function
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Repeat steps 1–9, but change the code in this function, since it is used to start the EC2 instance.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
from datetime import datetime
# Configuration
region = 'ap-south-1'
instances = ['i-036bb14d2a7a28941']
sns_topic_arn = 'arn:aws:sns:ap-south-1:530424100396:startec2'  # replace with your actual SNS topic ARN
# Clients
ec2 = boto3.client('ec2', region_name=region)
sns = boto3.client('sns', region_name=region)
def lambda_handler(event, context):
    ec2.start_instances(InstanceIds=instances)
    print('Started your instance: ' + str(instances))

    # Compose notification
    timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    message = f"✅ EC2 instance {instances[0]} started at {timestamp}."

    # Send notification via SNS
    sns.publish(
        TopicArn=sns_topic_arn,
        Subject='EC2 Instance Started',
        Message=message
    )
    return {
        'statusCode': 200,
        'body': message
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Click on “&lt;strong&gt;Deploy”&lt;/strong&gt; to deploy the code. You can also click on “&lt;strong&gt;Test”&lt;/strong&gt; to run the function manually and verify that it works.&lt;/li&gt;
&lt;/ol&gt;
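
&lt;p&gt;The same manual test can be run from the AWS CLI (a sketch; assumes the function is named &lt;code&gt;stopec2&lt;/code&gt; and your CLI credentials are configured):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke --function-name stopec2 --cli-binary-format raw-in-base64-out --payload '{}' response.json
cat response.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;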

&lt;h2&gt;
  
  
  &lt;strong&gt;Step — 3 : Create 2 SNS Topics for start and stop instance notifications&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6okojsjb5l1xe49mekhh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6okojsjb5l1xe49mekhh.png" alt="Creating SNS topic" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Click on &lt;strong&gt;“Next step”&lt;/strong&gt; and provide the details, such as the &lt;strong&gt;name&lt;/strong&gt; and &lt;strong&gt;type&lt;/strong&gt; of the SNS topic. Then click on “&lt;strong&gt;create topic”&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx76gbr50yfo1azj9k4w7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx76gbr50yfo1azj9k4w7.png" alt="captionless image" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create another topic with &lt;strong&gt;“standard”&lt;/strong&gt; as its type and “&lt;strong&gt;stopec2rule”&lt;/strong&gt; as its name. Click on “&lt;strong&gt;create topic”&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqri8dq76ok9gdqsymuea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqri8dq76ok9gdqsymuea.png" alt="captionless image" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;After creating the topics, create a subscription for each of them. Choose how notifications should be delivered (Email, SMS, Email-JSON, etc.) and provide your email address if you want to be notified by mail.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A mail is then sent to your address to &lt;strong&gt;confirm the subscription&lt;/strong&gt;, so that notifications are forwarded to the email you have confirmed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
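
&lt;p&gt;The subscription can also be created from the AWS CLI (a sketch; the topic ARN and email address are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sns subscribe --topic-arn arn:aws:sns:ap-south-1:ACCOUNT_ID:stopec2 --protocol email --notification-endpoint you@example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;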

&lt;h2&gt;
  
  
  &lt;strong&gt;Step — 4 : Create 2 Rules for Start and stop of EC2 instance under EventBridge Scheduler&lt;/strong&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  startec2Rule
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Go to the Lambda function (startec2) and click on “&lt;strong&gt;Add trigger”.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt; Select “&lt;strong&gt;EventBridge (CloudWatch Events)”&lt;/strong&gt; as the trigger and provide the details.&lt;/li&gt;
&lt;li&gt; Create a new rule and give it a name (&lt;strong&gt;startec2rule&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt; Select “&lt;strong&gt;Schedule expression”&lt;/strong&gt; as the rule type.&lt;/li&gt;
&lt;li&gt; Provide the schedule expression in “&lt;strong&gt;cron” format.&lt;/strong&gt; I used a cron expression that starts the EC2 instance at &lt;strong&gt;8:25 PM&lt;/strong&gt; on &lt;strong&gt;23 May 2025.&lt;/strong&gt; You can adjust the cron expression to suit your own schedule.&lt;/li&gt;
&lt;li&gt; Click on “&lt;strong&gt;Add”&lt;/strong&gt; to add the trigger.&lt;/li&gt;
&lt;/ol&gt;
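
&lt;p&gt;Note that EventBridge evaluates cron expressions in &lt;strong&gt;UTC&lt;/strong&gt;, so a local IST time has to be shifted back by 5 hours 30 minutes. A small helper to do the conversion (the format is &lt;code&gt;cron(minutes hours day-of-month month day-of-week year)&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from datetime import datetime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))  # India has no DST, so a fixed offset is safe

def eventbridge_cron(local_dt):
    """Convert an aware datetime to a one-shot EventBridge cron() expression in UTC."""
    utc = local_dt.astimezone(timezone.utc)
    return f"cron({utc.minute} {utc.hour} {utc.day} {utc.month} ? {utc.year})"

# 8:25 PM IST on 23 May 2025 is 14:55 UTC
print(eventbridge_cron(datetime(2025, 5, 23, 20, 25, tzinfo=IST)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;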

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnnr40fn6mwi0767nvi7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnnr40fn6mwi0767nvi7.png" alt="eventbridge as lambda Trigger" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  stopec2Rule
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Repeat steps 1–4; in step 5 you need a different cron expression, this time for stopping the EC2 instance.&lt;/li&gt;
&lt;li&gt; Provide the schedule expression in “&lt;strong&gt;cron” format.&lt;/strong&gt; I used a cron expression that stops the EC2 instance at &lt;strong&gt;8:40 PM&lt;/strong&gt; on &lt;strong&gt;23 May 2025.&lt;/strong&gt; You can adjust the cron expression to suit your own schedule.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv38jod79wxlgfbxn20bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv38jod79wxlgfbxn20bg.png" alt="eventbridge as lambda Trigger" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Results&lt;/strong&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Starting of the EC2 instance and getting notified about the starting of the instance.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa68d0bi0nwhm6fe04xtr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa68d0bi0nwhm6fe04xtr.jpeg" alt="EC2 — instance being started" width="800" height="158"&gt;&lt;/a&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyurhqgkd3p64xtoh1ge1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyurhqgkd3p64xtoh1ge1.jpeg" alt="EC2 instance startup notification" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Stopping of the EC2 instance and getting notified about the Stopping of the instance.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdax78n7v3qeh5qz0ywp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdax78n7v3qeh5qz0ywp.jpeg" alt="EC2 — instance being stopped" width="800" height="335"&gt;&lt;/a&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpmpn7531s3gjvpou43f.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpmpn7531s3gjvpou43f.jpeg" alt="EC2 instance stopped notification" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By scheduling start and stop actions through cron expressions, the instance operates only when needed, optimizing cost and efficiency. SNS notifications ensure real-time alerts on each action. The project is simple, cost-effective, and a great example of using AWS services to automate cloud operations.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsec2</category>
      <category>ec2scheduling</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to Crack Password-Protected ZIP Files Using John the Ripper on Kali Linux</title>
      <dc:creator>Nikhil Raj A </dc:creator>
      <pubDate>Fri, 13 Jun 2025 08:36:54 +0000</pubDate>
      <link>https://forem.com/nikhilraj-2003/how-to-crack-password-protected-zip-files-using-john-the-ripper-on-kali-linux-5fmf</link>
      <guid>https://forem.com/nikhilraj-2003/how-to-crack-password-protected-zip-files-using-john-the-ripper-on-kali-linux-5fmf</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figqmv92zkb8ztm6b7ykr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figqmv92zkb8ztm6b7ykr.png" alt="John the Ripper (password cracking software tool)" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Learn how to crack password-protected ZIP files using John the Ripper on Kali Linux in this step-by-step cybersecurity project.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;John the Ripper is a powerful and widely used open-source password cracking tool designed to test password strength and perform security audits. In this blog, we’ll walk through a practical, hands-on cybersecurity project where we use John the Ripper in Kali Linux to crack a ZIP file password. This exercise is ideal for cybersecurity students and beginners looking to understand password hashing and cracking fundamentals in a controlled, ethical environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is John the Ripper?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;John the Ripper (JTR) is an advanced password recovery tool used in penetration testing and digital forensics. It supports various hash types and file formats, including ZIP, RAR, Linux shadow files, and more. It works by attempting dictionary or brute-force attacks on hashed passwords to recover the original plaintext passwords.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why We Used a ZIP File
&lt;/h2&gt;

&lt;p&gt;We used a ZIP file because it’s a widely supported and beginner-friendly archive format that allows password protection. It integrates smoothly with John the Ripper through the &lt;code&gt;zip2john&lt;/code&gt; utility, making it easy to extract password hashes. Compared to other formats like RAR or PDF, ZIP files are quicker to set up and crack, making them ideal for educational and demonstration purposes.&lt;/p&gt;
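
&lt;p&gt;The core workflow looks like this (a sketch; &lt;code&gt;secret.zip&lt;/code&gt; and the wordlist path are placeholders; on Kali, &lt;code&gt;rockyou.txt&lt;/code&gt; may first need to be extracted from &lt;code&gt;rockyou.txt.gz&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Extract the password hash from the archive
zip2john secret.zip &amp;gt; hash.txt

# Run a dictionary attack against the hash
john --wordlist=/usr/share/wordlists/rockyou.txt hash.txt

# Display any cracked passwords
john --show hash.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;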

&lt;h2&gt;
  
  
  &lt;strong&gt;Project Setup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For this project, we created a password-protected ZIP file. We used Kali Linux as our ethical hacking environment and accessed it via Remote Desktop Protocol (RDP).&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Steps Overview:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Create a ZIP file with a password.&lt;/li&gt;
&lt;li&gt; Start Kali Linux with &lt;code&gt;sudo service xrdp start&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; Use &lt;code&gt;ip add&lt;/code&gt; to obtain the IP address of Kali.&lt;/li&gt;
&lt;li&gt; Connect via RDP and login.&lt;/li&gt;
&lt;li&gt; Transfer the ZIP file to Kali Linux Desktop.&lt;/li&gt;
&lt;li&gt; Use John the Ripper to extract and crack the password.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step-by-Step Implementation&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Create a Password-Protected ZIP File :&lt;/strong&gt;
Choose an existing file and archive it into a ZIP format.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbfow8dlxhi1hzffsd5k.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbfow8dlxhi1hzffsd5k.jpeg" alt="creating a file into ZIP format" width="800" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Secure the file with a password (e.g., &lt;code&gt;121314&lt;/code&gt;) so that no one can access the sensitive information inside it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhzql8hlrl0tlgrma309.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhzql8hlrl0tlgrma309.jpeg" alt="Providing password for the ZIP file" width="800" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Start Kali Linux Environment :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jlhhgieyid6cs90lzxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jlhhgieyid6cs90lzxw.png" alt="starting the kali-linux service" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Launch Kali Linux and run the command below to start the XRDP service, which lets us connect to Kali over Remote Desktop. After entering the command, the system prompts for your password for authentication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service xrdp start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Finding the IP Address of Kali Linux :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0uhuzxsm6cj3bh0lcld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0uhuzxsm6cj3bh0lcld.png" alt="Finding the IP address" width="743" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To connect to the Kali Linux machine from another desktop or device — especially when retrieving files like passwords — we’ll need its IP address. This address acts as a unique identifier on the network.&lt;/p&gt;

&lt;p&gt;To find it, open the terminal in Kali Linux and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip add
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the output, locate the IP address assigned to your system. In our case, the IP address was &lt;em&gt;172.26.123.22&lt;/em&gt;. This IP will be used later when establishing a remote desktop session or transferring files to and from Kali Linux.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Connecting to Kali Linux via Remote Desktop:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh65h6zao7anpegl26mq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh65h6zao7anpegl26mq.png" alt="connection of kali-linux" width="458" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have the IP address of our Kali Linux machine, it’s time to connect to it remotely from another device or laptop.&lt;/p&gt;

&lt;p&gt;On your Windows system, open the &lt;strong&gt;Remote Desktop Connection&lt;/strong&gt; app (you can simply search for it in the Start menu). Once it launches, you’ll see a field where you need to enter the IP address — in our case, it’s &lt;code&gt;172.26.123.22&lt;/code&gt;. After typing it in, click &lt;strong&gt;Connect&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A login screen will appear asking for your Kali Linux credentials. Just enter your username and password, and you’ll be logged into the Kali desktop environment — all from your remote device!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Logging into Kali Linux:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the remote connection is established, you’ll be redirected to the Kali Linux login screen. Here, simply enter the &lt;strong&gt;username&lt;/strong&gt; and &lt;strong&gt;password&lt;/strong&gt; you set up earlier during the Kali installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4infxqoc3qdyjl2ksdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4infxqoc3qdyjl2ksdi.png" alt="logging-in using username and password" width="602" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After logging in successfully, you’ll have full access to the Kali Linux desktop environment — ready to explore its powerful tools and features, all from your remote device.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Transfer the ZIP File:&lt;/strong&gt;&lt;br&gt;
Once you’re logged into Kali Linux through the remote desktop, the next step is to transfer the ZIP file you previously created on your main desktop. Copy it and paste it into the &lt;strong&gt;Kali Linux desktop environment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Preparing the File for John the Ripper:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd15170ntoz1020677t07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd15170ntoz1020677t07.png" alt="changing the directory" width="756" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the ZIP file now on the &lt;strong&gt;Kali Linux desktop&lt;/strong&gt;, it is easily accessible to the &lt;strong&gt;John the Ripper&lt;/strong&gt; tool, which simplifies the cracking process.&lt;/p&gt;

&lt;p&gt;Once the file is pasted onto the Kali desktop, open the &lt;strong&gt;Terminal&lt;/strong&gt; to proceed. To navigate to the desktop where the file is located, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd Desktop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command changes the current working directory to the desktop, allowing you to interact with the file directly from the terminal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Extracting the Hash from the ZIP File:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the working directory now set to the desktop, we can begin using &lt;strong&gt;John the Ripper&lt;/strong&gt;. To extract the hash from the ZIP file, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo zip2john cybersecurity.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this command, &lt;code&gt;zip2john&lt;/code&gt; is the tool that processes the ZIP file, and &lt;code&gt;cybersecurity.zip&lt;/code&gt; is the name of the file you want to crack. Make sure you replace the filename if your ZIP file has a different name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5arlzij3ufs053mnnhr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5arlzij3ufs053mnnhr.png" alt="Extracting the file into Hash format" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After running the command, the system will prompt you to enter your &lt;strong&gt;sudo (admin) password&lt;/strong&gt; for authentication. Once authenticated, the tool will output the &lt;strong&gt;encrypted hash&lt;/strong&gt; of the ZIP file — this is the data that John the Ripper will attempt to crack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Saving the Hash to a Text File:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The encrypted data output by the &lt;code&gt;zip2john&lt;/code&gt; command is in &lt;strong&gt;hash format&lt;/strong&gt;, which John the Ripper can analyze and crack efficiently. To make the process smoother, we need to save this hash into a text file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrzqit78tsszshlwsw96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrzqit78tsszshlwsw96.png" alt="Forwarding the hash output to a text file" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can do this by running the following command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo zip2john cybersecurity.zip &amp;gt; hash.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command redirects the hashed output into a file named &lt;code&gt;hash.txt&lt;/code&gt;. By doing this, we allow John the Ripper to focus directly on the hash file, making the password-cracking process more streamlined and effective.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Cracking the Password Using John the Ripper:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that the hash has been successfully saved into a text file (&lt;code&gt;hash.txt&lt;/code&gt;), it’s time to use &lt;strong&gt;John the Ripper&lt;/strong&gt; to crack the password.&lt;/p&gt;

&lt;p&gt;Run the following command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;john hash.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells John the Ripper to begin analyzing the hash and attempt to recover the original password.&lt;/p&gt;

&lt;p&gt;After a few moments, the tool will display the cracked password. In our case, it revealed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;121314
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll see the password appear alongside the filename on the terminal screen. And just like that — the password-protected ZIP file has been cracked successfully!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwtcxbdw99ilrzayphrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwtcxbdw99ilrzayphrv.png" alt="Password Retireived succesfully" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Results&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;John the Ripper successfully cracked the ZIP file password. The output displayed the plaintext password next to the filename, verifying the tool’s capability to efficiently perform dictionary-based cracking.&lt;/p&gt;
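&lt;p&gt;The dictionary attack that John the Ripper performs can be illustrated with a minimal Python sketch. This is a toy model only: SHA-256 stands in for the PKZIP/AES hash formats that &lt;code&gt;zip2john&lt;/code&gt; actually emits, and the wordlist is hypothetical.&lt;/p&gt;

```python
import hashlib

# Toy model of a dictionary attack: hash each candidate password and
# compare it with the target hash. John the Ripper does this at scale
# against real ZIP hash formats; SHA-256 is a stand-in used here only
# to keep the sketch self-contained.
target = hashlib.sha256(b"121314").hexdigest()  # hash of the unknown password
wordlist = ["password", "letmein", "123456", "121314"]  # hypothetical wordlist

def crack(target_hash, candidates):
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(crack(target, wordlist))  # prints 121314
```

&lt;p&gt;This also shows why &lt;em&gt;121314&lt;/em&gt; fell so quickly: any password present in a common wordlist is recovered almost instantly.&lt;/p&gt;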

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This project demonstrated how ethical hackers and cybersecurity students can use John the Ripper to test the strength of password-protected files. It reinforces the importance of using strong, complex passwords and the need for continuous security awareness.&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>passwordcracking</category>
      <category>kalilinux</category>
      <category>beginnerethicalhacking</category>
    </item>
    <item>
      <title>Learn how to design and deploy a 3‑tier web architecture on AWS — step‑by‑step guide.</title>
      <dc:creator>Nikhil Raj A </dc:creator>
      <pubDate>Thu, 12 Jun 2025 14:27:37 +0000</pubDate>
      <link>https://forem.com/nikhilraj-2003/learn-how-to-design-and-deploy-a-3-tier-web-architecture-on-aws-step-by-step-guide-54ao</link>
      <guid>https://forem.com/nikhilraj-2003/learn-how-to-design-and-deploy-a-3-tier-web-architecture-on-aws-step-by-step-guide-54ao</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96yue7hnqb5w380yai67.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96yue7hnqb5w380yai67.gif" alt="VPC 3-tier layout AWS diagram " width="800" height="878"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A 3-tier architecture divides a web application into three layers: &lt;strong&gt;Web (presentation)&lt;/strong&gt;, &lt;strong&gt;Application (logic)&lt;/strong&gt;, and &lt;strong&gt;Database (data)&lt;/strong&gt;. This design improves &lt;strong&gt;scalability, security, and maintainability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Using AWS services like &lt;strong&gt;EC2&lt;/strong&gt;, &lt;strong&gt;Elastic Load Balancer&lt;/strong&gt;, &lt;strong&gt;RDS&lt;/strong&gt;, and &lt;strong&gt;VPC&lt;/strong&gt;, we can build a cloud-based 3-tier architecture where each layer is &lt;strong&gt;logically separated&lt;/strong&gt; and &lt;strong&gt;independently scalable&lt;/strong&gt;, ensuring better performance and reliability for modern web applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is a VPC?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;VPC&lt;/strong&gt; stands for &lt;strong&gt;Virtual Private Cloud&lt;/strong&gt;.&lt;br&gt;
It is a &lt;strong&gt;logically isolated&lt;/strong&gt; section of the &lt;strong&gt;AWS Cloud&lt;/strong&gt; where you can &lt;strong&gt;launch AWS resources&lt;/strong&gt; (like EC2, RDS, etc.) in a &lt;strong&gt;virtual network&lt;/strong&gt; that &lt;strong&gt;you define&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why do we need a VPC?
&lt;/h2&gt;

&lt;p&gt;We need a &lt;strong&gt;VPC (Virtual Private Cloud)&lt;/strong&gt; to create a &lt;strong&gt;secure, isolated network environment&lt;/strong&gt; within AWS where we can launch and manage resources like EC2, RDS, and load balancers. It gives us full control over &lt;strong&gt;IP addressing, subnets, routing, and access rules&lt;/strong&gt;, allowing us to build architectures with &lt;strong&gt;public and private zones&lt;/strong&gt;, enforce &lt;strong&gt;security&lt;/strong&gt;, and connect securely to the internet or on-premises networks.&lt;/p&gt;


&lt;h2&gt;
  
  
  The architecture consists of:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Web Application Tier (Presentation Layer)&lt;/strong&gt;: This tier handles the user interface and user experience, providing a seamless interaction platform for customers. In an online clothing store, for example, this is where customers browse through clothing items, view details of each product, add items to their cart, and proceed to checkout.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Application Tier (Business Logic):&lt;/strong&gt; This tier processes the core functionalities of the e-commerce platform, including order processing, user authentication, and product catalog management. In the context of an online clothing store, this tier manages the logic for adding items to the cart, processing payments, managing user accounts, and updating inventory levels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Tier (Database):&lt;/strong&gt; This tier securely stores all the critical data, such as customer information, transaction records, and product details. For an online clothing store, this includes storing details of each clothing item, customer profiles, order histories, and payment information.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  AWS Account&lt;/li&gt;
&lt;li&gt;  EC2&lt;/li&gt;
&lt;li&gt;  VPC&lt;/li&gt;
&lt;li&gt;  Subnets&lt;/li&gt;
&lt;li&gt;  Internet Gateway and NAT Gateway.&lt;/li&gt;
&lt;li&gt;  Elastic Load Balancer (ELB)&lt;/li&gt;
&lt;li&gt;  RDS (MySQL/PostgreSQL)&lt;/li&gt;
&lt;li&gt;  Security Groups and Network ACLs&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Check out Medium Profile : &lt;a href="https://medium.com/@nikhilsiri2003" rel="noopener noreferrer"&gt;https://medium.com/@nikhilsiri2003&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clone the github Repo for accessing the application code into your local&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github.com/NikhilRaj-2003/3-tier-architecture-using-aws.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 1 : Create an IAM role for the EC2 instance
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Under IAM Dashboard &amp;gt; Access Management &amp;gt; Roles, click on &lt;strong&gt;Create role&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt; Select &lt;strong&gt;AWS Service&lt;/strong&gt; as the trusted entity type, and EC2 as the use case.&lt;/li&gt;
&lt;li&gt; Under &lt;strong&gt;permission policies&lt;/strong&gt;, attach policies such as &lt;strong&gt;AmazonSSMManagedInstanceCore&lt;/strong&gt; and &lt;strong&gt;AmazonS3ReadOnlyAccess&lt;/strong&gt;. Then click on &lt;strong&gt;Next&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt; Provide a name for the role, then click on &lt;strong&gt;Create role&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxugr0swhuzggxu0t8d9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxugr0swhuzggxu0t8d9i.png" alt="IAM Role created" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2 : Building the Architecture
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Create a VPC with 2 public subnets for the web tier and 4 private subnets: 2 for the application tier and 2 for the database tier.&lt;/li&gt;
&lt;li&gt; Click on &lt;strong&gt;Your VPCs &amp;gt; Create VPC&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcjtvwiv0pgimpmqiy0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcjtvwiv0pgimpmqiy0j.png" alt="VPC creation" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;VPC only&lt;/strong&gt; if you’re just creating a VPC. You can also choose &lt;strong&gt;VPC and more&lt;/strong&gt; if you want to create the VPC, subnets, route tables, internet gateway, and NAT gateway in one shot.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide a name for the VPC, along with the IPv4 CIDR (10.0.0.0/16). Then click on &lt;strong&gt;create VPC&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;After creating the VPC, create the public and private subnets.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Under the subnets section, click on &lt;strong&gt;create subnet&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; First select the VPC you created, then start building the subnets.&lt;/li&gt;
&lt;li&gt; Create 2 public subnets across 2 availability zones, and provide an &lt;strong&gt;IPv4 subnet CIDR block&lt;/strong&gt; for each.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ru3l9j1c0fhvhbqmxsa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ru3l9j1c0fhvhbqmxsa.png" alt="subnet creation" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown above, create 5 more subnets; the tiers will then appear in the VPC’s resource map for a clearer overview.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88lcgr40s4tdmg8smz9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88lcgr40s4tdmg8smz9f.png" alt="Total number of subnets" width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Now create the internet gateway after subnet creation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Go to VPC &amp;gt; Internet gateways (on the left side), then click on &lt;strong&gt;create internet gateway&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt; Provide a name for the internet gateway, then click on &lt;strong&gt;create internet gateway&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt; Then select that internet gateway &amp;gt; Actions &amp;gt; Attach to VPC; your internet gateway is now attached to the VPC.&lt;/li&gt;
&lt;/ol&gt;




&lt;blockquote&gt;
&lt;p&gt;After the internet gateway, create the NAT gateway.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Under VPC &amp;gt; NAT gateways, click on &lt;strong&gt;create NAT gateway&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xtv5aiwhmbgwec8zmgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xtv5aiwhmbgwec8zmgg.png" alt="captionless image" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provide a name for the NAT gateway, select either of the 2 public subnets, and &lt;strong&gt;allocate an Elastic IP&lt;/strong&gt;. Remember that a NAT gateway with an Elastic IP must be placed in a public subnet. Then click on &lt;strong&gt;create NAT gateway&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;




&lt;blockquote&gt;
&lt;p&gt;After the NAT gateway, create the route tables.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Add a route that directs traffic from the VPC to the internet gateway. In other words, for all traffic destined for IPs outside the VPC CIDR range, we add an entry that directs it to the internet gateway as a target.&lt;/li&gt;
&lt;li&gt; Under VPC dashboard &amp;gt; Route tables &amp;gt; Provide a name for the route table &amp;gt; VPC: select your VPC &amp;gt; Edit routes &amp;gt; Add route &amp;gt; Target: Internet Gateway &amp;gt; Save changes.&lt;/li&gt;
&lt;li&gt; In the route table’s subnet associations &amp;gt; Edit explicit subnet associations &amp;gt; Select the two public subnets &amp;gt; Save associations. We will create 2 more route tables, one for each App-layer private subnet in each availability zone. These route tables will route App-layer traffic destined for outside the VPC to the NAT gateway in the respective availability zone. Let’s add the appropriate routes for this scenario.&lt;/li&gt;
&lt;li&gt; Under VPC &amp;gt; Route tables &amp;gt; Create route table &amp;gt; Name the table: Private_Route_AZ1 &amp;gt; Select your VPC &amp;gt; Create &amp;gt; Routes &amp;gt; Edit routes &amp;gt; Destination: 0.0.0.0/0 &amp;gt; Target: NAT Gateway &amp;gt; NAT gateway for AZ 1 (repeat the steps for the NAT gateway in AZ 2).&lt;/li&gt;
&lt;li&gt; For each of the 2 route tables: Edit subnet associations &amp;gt; Select the private subnet for AZ 1 (repeat for AZ 2).&lt;/li&gt;
&lt;/ol&gt;
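&lt;p&gt;The resulting routing layout can be summarised as plain data. This is a hypothetical sketch: the gateway IDs are placeholders, and the subnet names are illustrative labels rather than values from the console.&lt;/p&gt;

```python
# Route-table layout: the public table sends internet-bound traffic
# (0.0.0.0/0) to the internet gateway; each AZ's private table sends
# it to that AZ's NAT gateway. IDs here are placeholders.
route_tables = {
    "Public_Route":      {"0.0.0.0/0": "igw-PLACEHOLDER",
                          "subnets": ["public-web-1a", "public-web-1b"]},
    "Private_Route_AZ1": {"0.0.0.0/0": "nat-az1-PLACEHOLDER",
                          "subnets": ["private-app-1a"]},
    "Private_Route_AZ2": {"0.0.0.0/0": "nat-az2-PLACEHOLDER",
                          "subnets": ["private-app-1b"]},
}
for name, table in route_tables.items():
    print(name, "routes 0.0.0.0/0 via", table["0.0.0.0/0"])
```

&lt;p&gt;Keeping one private route table per availability zone means a NAT gateway failure in one AZ does not take down outbound traffic in the other.&lt;/p&gt;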




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3 : Create 5 security groups for the web, app, and database tiers&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Security groups:&lt;/strong&gt; these are used to tighten the rules around which traffic is allowed to reach our Elastic Load Balancers and EC2 instances. That said, we will need 5 security groups: one for the external LB, one for the internal LB, and one for each of the 3 tiers: Web, App, and DB.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Web-tier security group&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Go to Security Groups under EC2, then click on &lt;strong&gt;create security group&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt; Provide a name for the security group, then select the VPC that you created.&lt;/li&gt;
&lt;li&gt; Then add the in-bound rules allowing incoming traffic on the desired port number.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jtutss1fzv2589kyvqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jtutss1fzv2589kyvqq.png" alt="web-tier security group" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Web-ALB security group&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Provide a name for the security group.&lt;/li&gt;
&lt;li&gt; Then give a description for the security group (optional).&lt;/li&gt;
&lt;li&gt; Select the VPC which you created.&lt;/li&gt;
&lt;li&gt; Then add the in-bound rules with the proper ports, such as HTTP (80) and HTTPS (443). Then click on &lt;strong&gt;create security group&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz39gwe67zs6ho3jl147h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz39gwe67zs6ho3jl147h.png" alt="web-alb security-group" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Application-tier security group&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; First, remember that the application runs on port 4000, so we need to allow incoming traffic on port 4000 using a custom TCP rule.&lt;/li&gt;
&lt;li&gt; Name the security group for the application tier, with a description (optional).&lt;/li&gt;
&lt;li&gt; Then select the VPC which you created.&lt;/li&gt;
&lt;li&gt; Add the in-bound rule with custom TCP, port range 4000. Click on &lt;strong&gt;create security group&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfy4apfdvetrrmtz8g8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfy4apfdvetrrmtz8g8i.png" alt="security group for app-tier" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Application-Internal-alb security group&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; This is an internal application load balancer, and it also runs on port 80.&lt;/li&gt;
&lt;li&gt; Provide a name for the security group, with a description (optional).&lt;/li&gt;
&lt;li&gt; Then select the VPC which was created.&lt;/li&gt;
&lt;li&gt; Add the in-bound rule with HTTP, open on port 80.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w7s4vlkhhf6w18ah451.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w7s4vlkhhf6w18ah451.png" alt="security group for app-int-alb" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Database-tier security group&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Before creating the security group, remember that &lt;strong&gt;MySQL&lt;/strong&gt; runs on port &lt;strong&gt;3306&lt;/strong&gt;, so we need to allow port 3306 in the in-bound rules.&lt;/li&gt;
&lt;li&gt; Create a security group for the database tier and provide a name for it.&lt;/li&gt;
&lt;li&gt; Then give the description for the security group (optional).&lt;/li&gt;
&lt;li&gt; Select the VPC which was created in the beginning.&lt;/li&gt;
&lt;li&gt; Under in-bound rules, select the type as &lt;strong&gt;MySQL/Aurora&lt;/strong&gt; with port &lt;strong&gt;3306&lt;/strong&gt;. Then click on &lt;strong&gt;create security group&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmo3jnz7zont6j9s8xkk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmo3jnz7zont6j9s8xkk.png" alt="security group for database-tier" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4 : Create an RDS database and DB subnet group
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Creating DB subnet group&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2skexe45nkv89uin26cy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2skexe45nkv89uin26cy.png" alt="captionless image" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Provide a name for the DB subnet group, so that it can be identified easily.&lt;/li&gt;
&lt;li&gt; Choose the VPC that was created in the beginning.&lt;/li&gt;
&lt;li&gt; Choose the availability zones; select 2 zones (&lt;strong&gt;ap-south-1a&lt;/strong&gt; &amp;amp; &lt;strong&gt;ap-south-1b&lt;/strong&gt;).
&lt;/li&gt;
&lt;li&gt; Then, &lt;strong&gt;under add subnets&lt;/strong&gt;, select the &lt;strong&gt;private DB subnets&lt;/strong&gt; created earlier.&lt;/li&gt;
&lt;li&gt; Click on &lt;strong&gt;Create&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcwo0ehfxco5adn5c5dr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcwo0ehfxco5adn5c5dr.png" alt="DB subent group" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Creating DB instances&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; Select &lt;strong&gt;standard create&lt;/strong&gt; as the database creation method.&lt;/li&gt;
&lt;li&gt; Go for &lt;strong&gt;MySQL&lt;/strong&gt; as the engine type under engine options.&lt;/li&gt;
&lt;li&gt; Then select &lt;strong&gt;MySQL 8.0.35&lt;/strong&gt; as the engine version.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foffl3tchx4cqgjgen6fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foffl3tchx4cqgjgen6fa.png" alt="Engine options" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Under Templates, select the &lt;strong&gt;Free tier&lt;/strong&gt; template so that you won't be charged.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide a username or keep the default. Choose &lt;strong&gt;Self managed&lt;/strong&gt; for credentials management, then enter and confirm a custom password.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;db.t4g.micro&lt;/strong&gt; as the DB instance class, &lt;strong&gt;gp2&lt;/strong&gt; (General Purpose SSD) as the storage type, and 20 GiB of allocated storage.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofhyxluttxzl3qiq6dfi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofhyxluttxzl3qiq6dfi.png" alt="DB configuration" width="800" height="707"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Then select the VPC that was created, followed by the DB subnet group. Choose &lt;strong&gt;No&lt;/strong&gt; for &lt;strong&gt;Public access&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the existing security group &lt;strong&gt;DB-sg&lt;/strong&gt;, which we created earlier with the other security groups. Also select either of the availability zones (&lt;strong&gt;ap-south-1a&lt;/strong&gt; or &lt;strong&gt;ap-south-1b&lt;/strong&gt;). Then click on &lt;strong&gt;Create database&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpu2jsdvyoymhjo7gmjhr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpu2jsdvyoymhjo7gmjhr.png" alt="VPC and subnet group allocation" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;
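&lt;p&gt;The same RDS instance could be provisioned from the CLI. This is a hedged sketch of the settings chosen above; the identifier, username, password, and security group ID are placeholders you would replace with your own.&lt;/p&gt;

```shell
# Create the MySQL instance inside the DB subnet group, with no public access.
# Identifier, credentials, and security-group ID below are placeholders.
aws rds create-db-instance \
  --db-instance-identifier three-tier-db \
  --engine mysql \
  --engine-version 8.0.35 \
  --db-instance-class db.t4g.micro \
  --storage-type gp2 \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'ChangeMe123!' \
  --db-subnet-group-name three-tier-db-subnet-group \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --no-publicly-accessible \
  --availability-zone ap-south-1a
```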




&lt;h2&gt;
  
  
  Step 5 : Launch an EC2 instance (Application-tier)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9hnxsvsif6vdnevl0ou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9hnxsvsif6vdnevl0ou.png" alt="EC2 Instance" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Provide a name (app-ec2) for the EC2 instance.&lt;/li&gt;
&lt;li&gt; Select Amazon Linux 2 as the AMI (Amazon Machine Image).&lt;/li&gt;
&lt;li&gt; Choose &lt;strong&gt;t2.micro&lt;/strong&gt; as the instance type and select the key pair.&lt;/li&gt;
&lt;li&gt; Under Network settings &amp;gt; VPC, select the newly created VPC. Since this is the application-tier instance, assign it to a &lt;strong&gt;private-app-subnet (1 or 2)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Attach the existing security group &lt;strong&gt;app-sg&lt;/strong&gt;, which allows traffic on port 4000. Select the IAM role created earlier, then click on &lt;strong&gt;Launch instance&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; After launching, connect to the instance using Session Manager. You can't connect over SSH because the instance runs in a private subnet and has no public IP.&lt;/li&gt;
&lt;/ol&gt;
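&lt;p&gt;The launch-and-connect steps above map to two CLI calls. This is a sketch under the assumption that an SSM-enabled instance profile exists; every ID, key name, and profile name here is a placeholder.&lt;/p&gt;

```shell
# Launch the app-tier instance in a private subnet with no public IP,
# then connect through Session Manager instead of SSH.
# All IDs and the instance-profile name are placeholders.
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --subnet-id subnet-0aaa111bbb222ccc3 \
  --security-group-ids sg-0a11b22c33d44e55f \
  --iam-instance-profile Name=three-tier-ssm-profile \
  --no-associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=app-ec2}]'

# Session Manager connection (requires the SSM agent and IAM role above).
aws ssm start-session --target i-0123456789abcdef0
```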

&lt;blockquote&gt;
&lt;p&gt;Configuration of Application-tier EC2 instance&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;app-tier setup : &lt;a href="https://github.com/NikhilRaj-2003/3-tier-architecture-using-aws/blob/main/Implementation/Application-tier.md" rel="noopener noreferrer"&gt;https://github.com/NikhilRaj-2003/3-tier-architecture-using-aws/blob/main/Implementation/Application-tier.md&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create a Load Balancer with Target groups for Application-tier&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Target Group
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Choose &lt;strong&gt;Instances&lt;/strong&gt; as the target type.&lt;/li&gt;
&lt;li&gt; Provide a name for the target group. Since app-ec2 runs on port 4000, set the target group's port to 4000.&lt;/li&gt;
&lt;li&gt; Click on &lt;strong&gt;Next&lt;/strong&gt;. Select the instances to add to the target group, click on &lt;strong&gt;Include as pending below&lt;/strong&gt;, and then click on &lt;strong&gt;Create target group&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Load Balancer
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Choose &lt;strong&gt;Application Load Balancer&lt;/strong&gt; as the load balancer type.&lt;/li&gt;
&lt;li&gt; Provide a name for the load balancer and select &lt;strong&gt;Internal&lt;/strong&gt; as the scheme.&lt;/li&gt;
&lt;li&gt; Select the VPC we created, then select the &lt;strong&gt;private-app-subnets&lt;/strong&gt; across the two availability zones. Choose the &lt;strong&gt;app-int-lb&lt;/strong&gt; security group we created.&lt;/li&gt;
&lt;li&gt; Under Listeners and routing &amp;gt; Listener, select the target group created above to forward the traffic.&lt;/li&gt;
&lt;li&gt; Then click on &lt;strong&gt;Create load balancer&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
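&lt;p&gt;The internal ALB setup above can be sketched with the CLI as well. Names, IDs, and the truncated ARNs are placeholders; in practice you would reuse the ARNs printed by the earlier commands.&lt;/p&gt;

```shell
# Target group on port 4000 for the app tier, then an internal ALB
# and a listener that forwards to it. IDs and ARNs are placeholders.
aws elbv2 create-target-group \
  --name app-tier-tg --protocol HTTP --port 4000 \
  --vpc-id vpc-0123456789abcdef0 --target-type instance

aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:ap-south-1:111122223333:targetgroup/app-tier-tg/abc123 \
  --targets Id=i-0123456789abcdef0

aws elbv2 create-load-balancer \
  --name app-internal-alb --scheme internal \
  --subnets subnet-0aaa111bbb222ccc3 subnet-0ddd444eee555fff6 \
  --security-groups sg-0f66e55d44c33b22a

aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:ap-south-1:111122223333:loadbalancer/app/app-internal-alb/def456 \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:ap-south-1:111122223333:targetgroup/app-tier-tg/abc123
```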




&lt;h2&gt;
  
  
  Step 6 : Launch an EC2 instance ( Web-tier)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9hnxsvsif6vdnevl0ou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9hnxsvsif6vdnevl0ou.png" alt="EC2 Instance" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Provide a name (web-ec2) for the EC2 instance.&lt;/li&gt;
&lt;li&gt; Select Amazon Linux 2 as the AMI (Amazon Machine Image).&lt;/li&gt;
&lt;li&gt; Choose &lt;strong&gt;t2.micro&lt;/strong&gt; as the instance type and select the key pair.&lt;/li&gt;
&lt;li&gt; Under Network settings &amp;gt; VPC, select the newly created VPC. Since this is the web-tier instance, assign it to a &lt;strong&gt;public-subnet (1 or 2)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Also enable &lt;strong&gt;Auto-assign public IP&lt;/strong&gt; so the instance receives a public IP address.&lt;/li&gt;
&lt;li&gt; Attach the existing security group &lt;strong&gt;web-sg&lt;/strong&gt;. Select the IAM role created earlier, then click on &lt;strong&gt;Launch instance&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Configuration of Web-Tier EC2 instance&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;web-tier setup : &lt;a href="https://github.com/NikhilRaj-2003/3-tier-architecture-using-aws/blob/main/Implementation/Web-tier.md" rel="noopener noreferrer"&gt;https://github.com/NikhilRaj-2003/3-tier-architecture-using-aws/blob/main/Implementation/Web-tier.md&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Output
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccil1nqt8himqvm814y6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccil1nqt8himqvm814y6.png" alt="web-page of the application" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Now you can enter data into the application, and it will be stored in the database we created.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2wftbh1fak736xevuti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2wftbh1fak736xevuti.png" alt="Inserted the data" width="667" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create Load Balancer with Target group for Web-tier&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Target Group
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Choose &lt;strong&gt;Instances&lt;/strong&gt; as the target type.&lt;/li&gt;
&lt;li&gt; Provide a name for the target group. Since web-ec2 runs on port 80, set the target group's port to 80.&lt;/li&gt;
&lt;li&gt; Click on &lt;strong&gt;Next&lt;/strong&gt;. Select the instances to add to the target group, click on &lt;strong&gt;Include as pending below&lt;/strong&gt;, and then click on &lt;strong&gt;Create target group&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Load Balancer
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Choose &lt;strong&gt;Application Load Balancer&lt;/strong&gt; as the load balancer type.&lt;/li&gt;
&lt;li&gt; Provide a name for the load balancer and select &lt;strong&gt;Internet-facing&lt;/strong&gt; as the scheme.&lt;/li&gt;
&lt;li&gt; Select the VPC we created, then select the &lt;strong&gt;public-web-subnets&lt;/strong&gt; across the two availability zones. Choose the &lt;strong&gt;web-alb-sg&lt;/strong&gt; security group we created.&lt;/li&gt;
&lt;li&gt; Under Listeners and routing &amp;gt; Listener, select the target group created above to forward the traffic.&lt;/li&gt;
&lt;li&gt; Then click on &lt;strong&gt;Create load balancer&lt;/strong&gt;. Once the load balancer is created, you can access the website using its DNS name.&lt;/li&gt;
&lt;/ol&gt;
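&lt;p&gt;The web-tier pair follows the same CLI pattern as the app tier, differing only in the port and the internet-facing scheme. IDs and names below are placeholders.&lt;/p&gt;

```shell
# Target group on port 80 for the web tier and an internet-facing ALB
# in the two public subnets. All IDs are placeholders.
aws elbv2 create-target-group \
  --name web-tier-tg --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 --target-type instance

aws elbv2 create-load-balancer \
  --name web-public-alb --scheme internet-facing \
  --subnets subnet-0bbb222ccc333ddd4 subnet-0eee555fff666aaa7 \
  --security-groups sg-0c99d88e77f66a55b
```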

&lt;blockquote&gt;
&lt;p&gt;Note :&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt; If you don't want to use the load balancer's DNS name, create a custom DNS record (an &lt;strong&gt;A record&lt;/strong&gt; pointing at the load balancer). You can then access the website through your custom domain.&lt;/li&gt;
&lt;li&gt; For each of the two EC2 instances you created, create an AMI (Amazon Machine Image) from the running instance, then create a launch template: provide a name, select the AMI you created, and choose the existing VPC, subnets, and security group. Click on &lt;strong&gt;Create launch template&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Then create an Auto Scaling group for both the application tier and the web tier. Select the launch template created for each tier, assign the VPC and the desired subnets, and attach the load balancer. Click on &lt;strong&gt;Create Auto Scaling group&lt;/strong&gt;. This launches multiple EC2 instances for each tier, and they are automatically registered with the load balancer through the target groups.&lt;/li&gt;
&lt;/ol&gt;
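&lt;p&gt;The AMI, launch template, and Auto Scaling steps in the note above can be sketched like this for the web tier (repeat for the app tier). Every ID, name, and ARN here is a placeholder.&lt;/p&gt;

```shell
# 1. Create an AMI from the running web-tier instance.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name web-tier-ami

# 2. Wrap the AMI in a launch template.
aws ec2 create-launch-template \
  --launch-template-name web-tier-lt \
  --launch-template-data '{"ImageId":"ami-0abcdef1234567890","InstanceType":"t2.micro","SecurityGroupIds":["sg-0b77c66d55e44f33a"]}'

# 3. Create the Auto Scaling group, spread over the public subnets and
#    attached to the web-tier target group so new instances register
#    with the load balancer automatically.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-tier-asg \
  --launch-template LaunchTemplateName=web-tier-lt \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-0bbb222ccc333ddd4,subnet-0eee555fff666aaa7" \
  --target-group-arns arn:aws:elasticloadbalancing:ap-south-1:111122223333:targetgroup/web-tier-tg/abc123
```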

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;A 3-tier architecture separates an application into three logical layers: &lt;strong&gt;web (presentation)&lt;/strong&gt;, &lt;strong&gt;app (business logic)&lt;/strong&gt;, and &lt;strong&gt;database (data storage)&lt;/strong&gt;. This structure enhances &lt;strong&gt;scalability&lt;/strong&gt;, &lt;strong&gt;security&lt;/strong&gt;, and &lt;strong&gt;maintenance&lt;/strong&gt; by isolating each layer. It allows each tier to be managed, updated, and scaled independently. In cloud environments like AWS, it’s a best-practice model for building robust and modular applications.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;AWS&lt;/code&gt; &lt;code&gt;3-Tier-Architecture&lt;/code&gt; &lt;code&gt;EC2&lt;/code&gt; &lt;code&gt;VPC&lt;/code&gt; &lt;code&gt;Designing&lt;/code&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>3tierarchitecture</category>
      <category>designing</category>
    </item>
  </channel>
</rss>
