<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vishal Raju</title>
    <description>The latest articles on Forem by Vishal Raju (@vishal_raju_6a7ca9503a75b).</description>
    <link>https://forem.com/vishal_raju_6a7ca9503a75b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1679880%2Fda3f06d9-48ff-4074-b2d7-aa33cea2fc16.png</url>
      <title>Forem: Vishal Raju</title>
      <link>https://forem.com/vishal_raju_6a7ca9503a75b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vishal_raju_6a7ca9503a75b"/>
    <language>en</language>
    <item>
      <title>Zomato Clone: Secure Deployment with DevSecOps CI/CD</title>
      <dc:creator>Vishal Raju</dc:creator>
      <pubDate>Fri, 10 Jan 2025 17:52:43 +0000</pubDate>
      <link>https://forem.com/vishal_raju_6a7ca9503a75b/zomato-clone-secure-deployment-with-devsecops-cicd-3mcj</link>
      <guid>https://forem.com/vishal_raju_6a7ca9503a75b/zomato-clone-secure-deployment-with-devsecops-cicd-3mcj</guid>
      <description>&lt;p&gt;Table of contents&lt;br&gt;
Steps:-&lt;br&gt;
Step 1:Launch an Ubuntu(22.04) T2 Large Instance&lt;br&gt;
Step 2 — Install Jenkins, Docker and Trivy&lt;br&gt;
2A — To Install Jenkins&lt;br&gt;
2B — Install Docker&lt;br&gt;
2C — Install Trivy&lt;br&gt;
Step 3 — Install Plugins like JDK, Sonarqube Scanner, NodeJs, OWASP Dependency Check&lt;br&gt;
3A — Install Plugin&lt;br&gt;
3B — Configure Java and Nodejs in Global Tool Configuration&lt;br&gt;
3C — Create a Job&lt;br&gt;
Step 4 — Configure Sonar Server in Manage Jenkins&lt;br&gt;
Step 5 — Install OWASP Dependency Check Plugins&lt;br&gt;
Step 6 — Docker Image Build and Push&lt;br&gt;
Step 8: Terminate instances.&lt;br&gt;
Complete Pipeline&lt;/p&gt;

&lt;p&gt;Hey there! Get ready for an exciting journey as we deploy a React JS Zomato clone. Our trusty companion on this adventure is Jenkins, serving as our CI/CD tool, while the build and deployment happen inside Docker containers. This blog is your go-to guide for the entire process.&lt;/p&gt;

&lt;p&gt;Steps:-&lt;br&gt;
Step 1 — Launch an Ubuntu(22.04) T2 Large Instance&lt;/p&gt;

&lt;p&gt;Step 2 — Install Jenkins, Docker and Trivy. Create a Sonarqube Container using Docker.&lt;/p&gt;

&lt;p&gt;Step 3 — Install Plugins like JDK, Sonarqube Scanner, Nodejs, and OWASP Dependency Check.&lt;/p&gt;

&lt;p&gt;Step 4 — Create a Pipeline Project in Jenkins using a Declarative Pipeline&lt;/p&gt;

&lt;p&gt;Step 5 — Install OWASP Dependency Check Plugins&lt;/p&gt;

&lt;p&gt;Step 6 — Docker Image Build and Push&lt;/p&gt;

&lt;p&gt;Step 7 — Deploy the image using Docker&lt;/p&gt;

&lt;p&gt;Step 8 — Terminate the AWS EC2 Instances.&lt;/p&gt;

&lt;p&gt;Now, let’s get started and dig deeper into each of these steps:-&lt;/p&gt;

&lt;p&gt;Step 1 — Launch an Ubuntu (22.04) T2 Large Instance&lt;br&gt;
Launch an AWS T2 Large instance with Ubuntu as the image. You can create a new key pair or use an existing one. Enable HTTP and HTTPS in the Security Group and open all ports (opening all ports is not a best practice, but it is acceptable for learning purposes).&lt;/p&gt;

&lt;p&gt;Step 2 — Install Jenkins, Docker and Trivy&lt;br&gt;
2A — Install Jenkins&lt;br&gt;
Connect to your console and create a script to install Jenkins:&lt;/p&gt;

&lt;p&gt;vim jenkins.sh&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
sudo apt update -y
# sudo apt upgrade -y
wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | tee /etc/apt/keyrings/adoptium.asc
echo "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | tee /etc/apt/sources.list.d/adoptium.list
sudo apt update -y
sudo apt install temurin-17-jdk -y
/usr/bin/java --version
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee \
    /usr/share/keyrings/jenkins-keyring.asc &amp;gt; /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
    https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
    /etc/apt/sources.list.d/jenkins.list &amp;gt; /dev/null
sudo apt-get update -y
sudo apt-get install jenkins -y
sudo systemctl start jenkins
sudo systemctl status jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make the script executable and run it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chmod +x jenkins.sh
./jenkins.sh    # this will install Jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once Jenkins is installed, go to your AWS EC2 Security Group and open Inbound Port 8080, since Jenkins runs on Port 8080.&lt;/p&gt;
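&lt;p&gt;If you prefer the CLI over the console, opening the port can be sketched like this (the security-group ID is a placeholder; look up the one attached to your instance in the EC2 console):&lt;/p&gt;

```shell
# Hypothetical security-group ID; replace with the one attached to your instance.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8080 --cidr 0.0.0.0/0
```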

&lt;p&gt;Now, grab your Public IP Address and open public-ip:8080 in your browser.&lt;/p&gt;

&lt;p&gt;Retrieve the initial administrator password:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cat /var/lib/jenkins/secrets/initialAdminPassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Unlock Jenkins using an administrative password and install the suggested plugins.&lt;/p&gt;

&lt;p&gt;Jenkins will now be installed along with all the suggested libraries.&lt;/p&gt;

&lt;p&gt;Create a user, then click on Save and Continue.&lt;/p&gt;

&lt;p&gt;Jenkins Getting Started Screen.&lt;/p&gt;

&lt;p&gt;2B — Install Docker&lt;/p&gt;

&lt;p&gt;Run the following commands to install Docker:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker $USER   # in my case the user is ubuntu
newgrp docker
sudo chmod 777 /var/run/docker.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After the Docker installation, we create a Sonarqube container (remember to open port 9000 in the security group).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now our Sonarqube container is up and running.&lt;/p&gt;

&lt;p&gt;Enter the default username and password, click on Login, and change the password:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;username: admin
password: admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Set a new password; this is the Sonar Dashboard.&lt;/p&gt;

&lt;p&gt;2C — Install Trivy&lt;/p&gt;

&lt;p&gt;Create a script to install Trivy:&lt;/p&gt;

&lt;p&gt;vim trivy.sh&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg &amp;gt; /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we will log in to Jenkins and start to configure our Pipeline.&lt;/p&gt;

&lt;p&gt;Step 3 — Install Plugins like JDK, Sonarqube Scanner, NodeJs, OWASP Dependency Check&lt;br&gt;
3A — Install Plugins&lt;br&gt;
Go to Manage Jenkins → Plugins → Available Plugins&lt;/p&gt;

&lt;p&gt;Install the plugins below:&lt;/p&gt;

&lt;p&gt;1 → Eclipse Temurin Installer (Install without restart)&lt;/p&gt;

&lt;p&gt;2 → SonarQube Scanner (Install without restart)&lt;/p&gt;

&lt;p&gt;3 → NodeJs Plugin (Install Without restart)&lt;/p&gt;

&lt;p&gt;3B — Configure Java and Nodejs in Global Tool Configuration&lt;br&gt;
Go to Manage Jenkins → Tools → install JDK (17) and NodeJs (16) → click on Apply and Save.&lt;/p&gt;

&lt;p&gt;3C — Create a Job&lt;br&gt;
Create a job named Zomato, select Pipeline, and click on OK.&lt;/p&gt;

&lt;p&gt;Step 4 — Configure Sonar Server in Manage Jenkins&lt;br&gt;
Grab the Public IP Address of your EC2 Instance; Sonarqube works on Port 9000, so open public-ip:9000 in your browser. Go to your Sonarqube Server and click on Administration → Security → Users → Tokens → Update Token → give it a name → click on Generate Token.&lt;/p&gt;

&lt;p&gt;Click on Update Token.&lt;/p&gt;

&lt;p&gt;Create a token with a name and generate it.&lt;/p&gt;

&lt;p&gt;Copy the Token.&lt;/p&gt;
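&lt;p&gt;As an alternative to clicking through the UI, the token can also be generated through Sonarqube's Web API. The host and password below are placeholders:&lt;/p&gt;

```shell
# Hypothetical host and credentials; the endpoint returns the new token as JSON.
curl -u admin:your-new-password -X POST \
    "http://public-ip:9000/api/user_tokens/generate" \
    -d "name=jenkins-token"
```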

&lt;p&gt;Go to Jenkins Dashboard → Manage Jenkins → Credentials → Add Secret Text. It should look like this:&lt;/p&gt;

&lt;p&gt;You will see this page once you click on Create.&lt;/p&gt;

&lt;p&gt;Now, go to Dashboard → Manage Jenkins → System and Add like the below image.&lt;/p&gt;

&lt;p&gt;Click on Apply and Save&lt;/p&gt;

&lt;p&gt;The Configure System option in Jenkins is used to configure different servers.&lt;/p&gt;

&lt;p&gt;Global Tool Configuration is used to configure different tools that we install using Plugins&lt;/p&gt;

&lt;p&gt;We will install the Sonar Scanner in the Tools section.&lt;/p&gt;

&lt;p&gt;In the Sonarqube Dashboard, also add a quality gate:&lt;/p&gt;

&lt;p&gt;Administration → Configuration →Webhooks&lt;/p&gt;

&lt;p&gt;Click on Create&lt;/p&gt;

&lt;p&gt;Add details&lt;/p&gt;

&lt;p&gt;In the URL section of the webhook, enter:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://jenkins-public-ip:8080/sonarqube-webhook/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s go to our Pipeline and add the script in our Pipeline Script.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline{
    agent any
    tools{
        jdk 'jdk17'
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME=tool 'sonar-scanner'
    }
    stages {
        stage('clean workspace'){
            steps{
                cleanWs()
            }
        }
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/mudit097/Zomato-Clone.git'
            }
        }
        stage("Sonarqube Analysis"){
            steps{
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=zomato \
                    -Dsonar.projectKey=zomato '''
                }
            }
        }
        stage("quality gate"){
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
                }
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Click on Build Now; you will see the stage view like this.&lt;/p&gt;

&lt;p&gt;To see the report, you can go to Sonarqube Server and go to Projects.&lt;/p&gt;

&lt;p&gt;You can see the report has been generated and the status shows as passed. You can see that there are 1.3k lines. To see a detailed report, you can go to issues.&lt;/p&gt;

&lt;p&gt;Step 5 — Install OWASP Dependency Check Plugins&lt;br&gt;
Go to Dashboard → Manage Jenkins → Plugins → OWASP Dependency-Check. Click on it and install it without restart.&lt;/p&gt;

&lt;p&gt;First we installed the plugin; next, we need to configure the tool.&lt;/p&gt;

&lt;p&gt;Go to Dashboard → Manage Jenkins → Tools →&lt;/p&gt;

&lt;p&gt;Click on Apply and Save here.&lt;/p&gt;

&lt;p&gt;Now go to Configure → Pipeline, add this stage to your pipeline, and build.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    stage('OWASP FS SCAN') {
        steps {
            dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
            dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
        }
    }
    stage('TRIVY FS SCAN') {
        steps {
            sh "trivy fs . &amp;gt; trivyfs.txt"
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The stage view would look like this,&lt;/p&gt;

&lt;p&gt;You will see that a graph of the vulnerabilities is also generated in the status view.&lt;/p&gt;

&lt;p&gt;Step 6 — Docker Image Build and Push&lt;br&gt;
We need to install the Docker tools in our system. Go to Dashboard → Manage Jenkins → Plugins → Available Plugins → search for Docker and install these plugins:&lt;/p&gt;

&lt;p&gt;Docker&lt;/p&gt;

&lt;p&gt;Docker Commons&lt;/p&gt;

&lt;p&gt;Docker Pipeline&lt;/p&gt;

&lt;p&gt;Docker API&lt;/p&gt;

&lt;p&gt;docker-build-step&lt;/p&gt;

&lt;p&gt;and click on install without restart&lt;/p&gt;

&lt;p&gt;Now, go to Dashboard → Manage Jenkins → Tools →&lt;/p&gt;

&lt;p&gt;Add DockerHub Username and Password under Global Credentials&lt;/p&gt;

&lt;p&gt;Add this stage to Pipeline Script&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    stage("Docker Build &amp;amp; Push"){
        steps{
            script{
               withDockerRegistry(credentialsId: 'docker', toolName: 'docker'){   
                   sh "docker build -t zomato ."
                   sh "docker tag zomato mudit097/zomato:latest"
                   sh "docker push mudit097/zomato:latest"
                }
            }
        }
    }
    stage("TRIVY"){
        steps{
            sh "trivy image mudit097/zomato:latest &amp;gt; trivy.txt" 
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will see the output below, with a dependency trend.&lt;/p&gt;

&lt;p&gt;When you log in to Dockerhub, you will see a new image is created&lt;/p&gt;

&lt;p&gt;Now run the container to see whether the app comes up, by adding the below stage.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage('Deploy to container'){
     steps{
            sh 'docker run -d --name zomato -p 3000:3000 mudit097/zomato:latest'
          }
      }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Stage view:&lt;/p&gt;

&lt;p&gt;Open jenkins-public-ip:3000 in your browser.&lt;/p&gt;

&lt;p&gt;You will get this output&lt;/p&gt;

&lt;p&gt;Step 8 — Terminate the AWS EC2 Instances&lt;br&gt;
Manage resources efficiently by terminating the AWS EC2 instances once you are done; this keeps costs under control and completes the deployment lifecycle. Use the AWS console or CLI to gracefully shut down and terminate the Ubuntu (22.04) T2 Large instance.&lt;/p&gt;
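&lt;p&gt;If you want to do this from the CLI rather than the console, a sketch (the instance ID below is a placeholder):&lt;/p&gt;

```shell
# List instance IDs, then terminate the lab instance (the ID below is hypothetical).
aws ec2 describe-instances --query "Reservations[].Instances[].InstanceId"
aws ec2 terminate-instances --instance-ids i-0abc1234def567890
```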

&lt;p&gt;Complete Pipeline&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline{
    agent any
    tools{
        jdk 'jdk17'
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME=tool 'sonar-scanner'
    }
    stages {
        stage('clean workspace'){
            steps{
                cleanWs()
            }
        }
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/mudit097/Zomato-Clone.git'
            }
        }
        stage("Sonarqube Analysis"){
            steps{
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=zomato \
                    -Dsonar.projectKey=zomato '''
                }
            }
        }
        stage("quality gate"){
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
                }
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . &amp;gt; trivyfs.txt"
            }
        }
        stage("Docker Build &amp;amp; Push"){
            steps{
                script{
                   withDockerRegistry(credentialsId: 'docker', toolName: 'docker'){
                       sh "docker build -t zomato ."
                       sh "docker tag zomato mudit097/zomato:latest"
                       sh "docker push mudit097/zomato:latest"
                    }
                }
            }
        }
        stage("TRIVY"){
            steps{
                sh "trivy image mudit097/zomato:latest &amp;gt; trivy.txt"
            }
        }
        stage('Deploy to container'){
            steps{
                sh 'docker run -d --name zomato -p 3000:3000 mudit097/zomato:latest'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In conclusion, this comprehensive guide has walked you through the essential steps to set up a robust and efficient CI/CD pipeline on an Ubuntu 22.04 T2 Large instance using Jenkins, Docker, and Trivy. From the initial launch of the AWS EC2 instance to the installation of critical tools such as Jenkins, Docker, and Trivy, as well as the creation of a Sonarqube container, each step has been covered.&lt;/p&gt;

&lt;p&gt;The integration of essential plugins like JDK, Sonarqube Scanner, Nodejs, and OWASP Dependency Check further enhances the pipeline’s capabilities. The creation of a Declarative Pipeline in Jenkins streamlines the development process, and the incorporation of OWASP Dependency Check Plugins fortifies security measures.&lt;/p&gt;

&lt;p&gt;The guide concludes with the crucial steps of Docker image build, push, deployment, and, ultimately, the termination of AWS EC2 instances, ensuring a seamless and controlled workflow. By following these steps, you’ve not only established a powerful CI/CD pipeline but also prioritized security and efficiency throughout the entire development lifecycle.&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>Playlists created and managed seamlessly with Terraform!</title>
      <dc:creator>Vishal Raju</dc:creator>
      <pubDate>Tue, 13 Aug 2024 18:40:36 +0000</pubDate>
      <link>https://forem.com/vishal_raju_6a7ca9503a75b/playlists-created-and-managed-seamlessly-with-terraform-4fg3</link>
      <guid>https://forem.com/vishal_raju_6a7ca9503a75b/playlists-created-and-managed-seamlessly-with-terraform-4fg3</guid>
      <description>&lt;p&gt;Project Overview&lt;br&gt;
This project focuses on using Terraform to create and manage multiple Spotify playlists for various occasions such as morning routines, evening relaxation, and party nights. Terraform automates the entire process, making it efficient and scalable.&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;/p&gt;

&lt;p&gt;Terraform Installed: Ensure Terraform is installed on your machine.&lt;br&gt;
Docker Installed: Make sure Docker is installed and running.&lt;br&gt;
Spotify Account: A Spotify account is required (premium access not necessary).&lt;br&gt;
Spotify Developer Account: Register and create an application to obtain the Client ID and Client Secret.&lt;br&gt;
Spotify Provider for Terraform: Install and configure the Spotify provider for Terraform.&lt;br&gt;
VS Code Editor: Recommended for editing Terraform files.&lt;br&gt;
Steps to Complete the Project&lt;/p&gt;

&lt;p&gt;Set Up Terraform Project&lt;/p&gt;

&lt;p&gt;Create a new directory for your Terraform project and navigate to it in your terminal.&lt;br&gt;
Create a file named spotifyterra.tf to define your configuration.&lt;br&gt;
Define the Spotify Provider:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    spotify = {
      source  = "conradludgate/spotify"
      version = "0.2.7"
    }
  }
}

provider "spotify" {
  api_key = var.api_key
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the spotifyterra.tf file, define the Spotify provider. Refer to the official HashiCorp documentation for the correct provider block.&lt;br&gt;
Obtain API Credentials&lt;/p&gt;

&lt;p&gt;To interact with Spotify's API, you'll need a Client ID and Client Secret.&lt;br&gt;
Create a Spotify App&lt;/p&gt;

&lt;p&gt;Visit the Spotify Developer Dashboard.&lt;br&gt;
Log in with your Spotify account.&lt;br&gt;
Click "Create an App" and complete the required settings.&lt;br&gt;
Copy the Client ID and Client Secret keys and paste them into a .env file.&lt;br&gt;
Run the Authorization Proxy&lt;/p&gt;

&lt;p&gt;Run the following command to start the authorization process:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it -p 27228:27228 --env-file .env ghcr.io/conradludgate/spotify-auth-proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running this command, you should see an "Authorization Successful" message.&lt;br&gt;
Add Playlists&lt;/p&gt;

&lt;p&gt;Use Terraform to add a playlist by defining it in your configuration file.&lt;br&gt;
Add Multiple Playlists&lt;/p&gt;

&lt;p&gt;Utilize the data source block in Terraform to access Spotify's platform directly and create multiple playlists automatically.&lt;br&gt;
Verify Playlists Creation&lt;/p&gt;

&lt;p&gt;Check your Spotify account to see the newly created playlists.&lt;br&gt;
By following these steps, your playlists will be created and managed seamlessly with Terraform!&lt;/p&gt;
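&lt;p&gt;As a rough sketch of what a playlist definition might look like — the resource and attribute names below are assumptions based on the conradludgate/spotify provider, so check its registry documentation before use:&lt;/p&gt;

```hcl
# Hypothetical sketch: look up a track, then manage a playlist containing it.
data "spotify_search_track" "morning" {
  name = "Here Comes the Sun"   # assumed search attribute
}

resource "spotify_playlist" "morning_routine" {
  name        = "Morning Routine"
  description = "Managed by Terraform"
  public      = true
  tracks      = [data.spotify_search_track.morning.tracks[0].id]
}
```

&lt;p&gt;After adding a block like this, run terraform init, terraform plan, and terraform apply to create the playlist.&lt;/p&gt;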

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe36nin3fkboapraheba2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe36nin3fkboapraheba2.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>spotify</category>
      <category>playlist</category>
    </item>
    <item>
      <title>Automating Email Notifications for S3 Object Uploads Using AWS SNS</title>
      <dc:creator>Vishal Raju</dc:creator>
      <pubDate>Fri, 02 Aug 2024 04:45:27 +0000</pubDate>
      <link>https://forem.com/vishal_raju_6a7ca9503a75b/automating-email-notifications-for-s3-object-uploads-using-aws-sns-2b58</link>
      <guid>https://forem.com/vishal_raju_6a7ca9503a75b/automating-email-notifications-for-s3-object-uploads-using-aws-sns-2b58</guid>
      <description>&lt;p&gt;In the ever-evolving landscape of cloud computing, automation is no longer a luxury—it's a necessity. The ability to automate processes not only enhances operational efficiency but also strengthens monitoring and security, allowing teams to respond swiftly to changes and potential issues. Amazon Web Services (AWS) provides an extensive suite of tools that make automation both powerful and accessible. In this blog post, I’ll guide you through a practical example: setting up an automated email notification system for when objects are uploaded to an S3 bucket using AWS Simple Notification Service (SNS).&lt;/p&gt;

&lt;p&gt;Why Automate Notifications?&lt;br&gt;
Before diving into the implementation, let's consider why automating notifications is important:&lt;/p&gt;

&lt;p&gt;Real-Time Monitoring: Automated notifications provide immediate awareness of events, allowing you to stay informed without manually checking logs or dashboards.&lt;br&gt;
Enhanced Security: Being promptly notified of any changes to your storage can be crucial for detecting unauthorized uploads or other suspicious activity.&lt;br&gt;
Workflow Efficiency: Automating notifications can streamline your workflow, ensuring that relevant stakeholders are informed instantly without any manual intervention.&lt;br&gt;
Overview of the Process&lt;br&gt;
In this tutorial, we’ll walk through the following steps:&lt;/p&gt;

&lt;p&gt;Creating an S3 Bucket: The storage space where your objects will be uploaded.&lt;br&gt;
Setting Up an SNS Topic: The communication channel through which notifications will be sent.&lt;br&gt;
Subscribing via Email: Configuring the SNS topic to send notifications to an email address.&lt;br&gt;
Updating the SNS Topic Access Policy: Adjusting permissions to allow S3 to publish messages to the SNS topic.&lt;br&gt;
Configuring S3 Notifications: Setting up the S3 bucket to send notifications to the SNS topic upon object uploads.&lt;br&gt;
Step 1: Creating an S3 Bucket&lt;br&gt;
Amazon S3 (Simple Storage Service) is a highly scalable, durable, and secure object storage service. To get started:&lt;/p&gt;

&lt;p&gt;Log in to your AWS Management Console.&lt;br&gt;
Navigate to the S3 service.&lt;br&gt;
Click on the "Create Bucket" button.&lt;br&gt;
Give your bucket a unique name, select the appropriate AWS region, and configure the settings according to your requirements.&lt;br&gt;
Click "Create" to finalize the bucket creation.&lt;br&gt;
Your S3 bucket is now ready to store objects. As a best practice, consider enabling versioning and server-side encryption to enhance the security and durability of your data.&lt;/p&gt;

&lt;p&gt;Step 2: Setting Up an SNS Topic&lt;br&gt;
Amazon SNS (Simple Notification Service) is a fully managed messaging service that allows you to send messages or notifications from one application to another or to end users. To create an SNS topic:&lt;/p&gt;

&lt;p&gt;In the AWS Management Console, navigate to the SNS service.&lt;br&gt;
Click on "Create topic."&lt;br&gt;
Choose the type of topic (Standard or FIFO). For this example, we’ll use a Standard topic.&lt;br&gt;
Enter a name for your topic and configure any additional settings as needed.&lt;br&gt;
Click "Create topic" to finalize.&lt;br&gt;
Your SNS topic is now set up and ready to handle notifications. This topic will act as the communication hub, forwarding messages to subscribers whenever an event occurs.&lt;/p&gt;

&lt;p&gt;Step 3: Subscribing via Email&lt;br&gt;
Next, we need to subscribe an email address to the SNS topic so that notifications can be sent to it:&lt;/p&gt;

&lt;p&gt;After creating your topic, click on "Create subscription."&lt;br&gt;
Select the protocol as "Email."&lt;br&gt;
Enter the email address where you want to receive the notifications.&lt;br&gt;
Click "Create subscription."&lt;br&gt;
You will receive a confirmation email at the provided address. Click the confirmation link in the email to activate the subscription. Once confirmed, your email address will be successfully subscribed to the topic.&lt;/p&gt;
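&lt;p&gt;The same subscription can also be created from the CLI; every value below is a placeholder:&lt;/p&gt;

```shell
# Hypothetical topic ARN and address; SNS sends a confirmation email to the endpoint.
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:s3-upload-alerts \
    --protocol email \
    --notification-endpoint you@example.com
```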

&lt;p&gt;Step 4: Updating the SNS Topic Access Policy&lt;br&gt;
For the SNS topic to receive notifications from S3, you need to modify the topic’s access policy:&lt;/p&gt;

&lt;p&gt;Go to the SNS topic you created and click on "Edit" under the "Access policy" section.&lt;/p&gt;

&lt;p&gt;In the access policy JSON editor, add a policy that allows S3 to publish to the SNS topic. The policy should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:your-region:your-account-id:your-topic-name",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:s3:::your-bucket-name"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace your-region, your-account-id, your-topic-name, and your-bucket-name with your specific details.&lt;/p&gt;

&lt;p&gt;Save the changes to update the policy.&lt;/p&gt;

&lt;p&gt;This policy grants Amazon S3 permission to publish messages to the SNS topic when an event occurs.&lt;/p&gt;
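&lt;p&gt;Filling in the placeholders by hand is error-prone; a small shell sketch can do the substitution before you paste the JSON into the access-policy editor. All concrete values below (region, account ID, topic, bucket) are hypothetical:&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical values; replace with your own region, account, topic, and bucket.
REGION=us-east-1
ACCOUNT_ID=123456789012
TOPIC=s3-upload-alerts
BUCKET=my-upload-bucket

# Write the placeholder policy template to a file.
printf '%s\n' \
'{' \
'  "Version": "2012-10-17",' \
'  "Statement": [' \
'    {' \
'      "Effect": "Allow",' \
'      "Principal": { "Service": "s3.amazonaws.com" },' \
'      "Action": "SNS:Publish",' \
'      "Resource": "arn:aws:sns:your-region:your-account-id:your-topic-name",' \
'      "Condition": {' \
'        "ArnLike": { "aws:SourceArn": "arn:aws:s3:::your-bucket-name" }' \
'      }' \
'    }' \
'  ]' \
'}' > sns-policy.json

# Substitute the placeholders and print the finished policy.
sed -e "s/your-region/$REGION/" \
    -e "s/your-account-id/$ACCOUNT_ID/" \
    -e "s/your-topic-name/$TOPIC/" \
    -e "s/your-bucket-name/$BUCKET/" \
    sns-policy.json > sns-policy-filled.json
cat sns-policy-filled.json
```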

&lt;p&gt;Step 5: Configuring S3 Notifications&lt;br&gt;
Finally, you need to configure your S3 bucket to send notifications to the SNS topic whenever an object is uploaded:&lt;/p&gt;

&lt;p&gt;Navigate to the S3 service in the AWS Management Console.&lt;br&gt;
Click on the bucket you created earlier.&lt;br&gt;
Go to the "Properties" tab and scroll down to "Event notifications."&lt;br&gt;
Click "Create event notification."&lt;br&gt;
Name your event and select the event type (e.g., "All object create events").&lt;br&gt;
In the "Send to" section, choose "SNS topic" and select the topic you created earlier.&lt;br&gt;
Save the event notification configuration.&lt;br&gt;
Your S3 bucket is now configured to send notifications to the SNS topic whenever a new object is uploaded. These notifications will then be sent to the subscribed email address.&lt;/p&gt;

&lt;p&gt;Testing the Setup&lt;br&gt;
To test the setup, upload a file to your S3 bucket. Within a few moments, you should receive an email notification indicating that a new object has been uploaded. The email will contain details such as the name of the bucket, the key of the uploaded object, and the event type.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
By integrating Amazon S3 and SNS, you can automate email notifications for object uploads, ensuring that you are always in the loop about changes in your cloud storage. This setup not only improves your monitoring capabilities but also enhances your security posture by alerting you to any unexpected uploads. As cloud environments grow more complex, automations like this become increasingly valuable for maintaining operational efficiency and security.&lt;/p&gt;

&lt;p&gt;I hope this guide helps you in setting up your own notification system. Feel free to dive into the AWS documentation for more advanced configurations, such as filtering notifications based on object key prefixes or suffixes.&lt;/p&gt;

&lt;p&gt;Excited to hear your thoughts and discuss more about cloud deployment strategies and automation. Feel free to leave comments or reach out if you have any questions!&lt;/p&gt;


</description>
      <category>cloud</category>
      <category>aws</category>
      <category>s3</category>
      <category>sns</category>
    </item>
    <item>
      <title>Deploying a Static Website on Amazon S3 with Terraform!</title>
      <dc:creator>Vishal Raju</dc:creator>
      <pubDate>Fri, 26 Jul 2024 06:47:17 +0000</pubDate>
      <link>https://forem.com/vishal_raju_6a7ca9503a75b/deploying-a-static-website-on-amazon-s3-with-terraform-11dc</link>
      <guid>https://forem.com/vishal_raju_6a7ca9503a75b/deploying-a-static-website-on-amazon-s3-with-terraform-11dc</guid>
      <description>&lt;p&gt;Introduction&lt;br&gt;
Static websites are a simple and effective way to present content without the need for server-side processing. Amazon S3 provides a robust platform for hosting these websites, ensuring high availability and scalability. Terraform, an Infrastructure as Code (IaC) tool, can automate the creation and management of your AWS resources, making the deployment process even more streamlined.&lt;/p&gt;

&lt;p&gt;In this guide, we will walk through the process of hosting a static website on Amazon S3 using Terraform, leveraging a modular file structure for clarity and ease of management. By the end of this tutorial, you will have a fully functional static website hosted on Amazon S3, managed entirely through Terraform.&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;br&gt;
Ensure you have the following prerequisites before starting:&lt;/p&gt;

&lt;p&gt;AWS Account: Sign up for an AWS account if you don’t have one.&lt;br&gt;
Terraform: Download and install Terraform from the official website.&lt;br&gt;
AWS CLI: Install the AWS CLI by following the instructions here.&lt;br&gt;
AWS Credentials: Configure your AWS CLI with your credentials by running aws configure.&lt;/p&gt;

&lt;p&gt;Step 1: Create the Project Directory&lt;br&gt;
Begin by creating a directory for your Terraform project and navigating into it.&lt;/p&gt;

&lt;p&gt;mkdir my-static-website&lt;br&gt;
cd my-static-website&lt;/p&gt;

&lt;p&gt;Step 2: Define the Terraform Configuration&lt;br&gt;
Create a file named terraform.tf and define your provider configuration to set up Terraform with the AWS provider.&lt;/p&gt;

&lt;h1&gt;
  
  
  terraform.tf
&lt;/h1&gt;

&lt;p&gt;terraform {&lt;br&gt;
  required_version = "1.8.5"&lt;br&gt;
  required_providers {&lt;br&gt;
    aws = {&lt;br&gt;
      source = "hashicorp/aws"&lt;br&gt;
      version = "5.40.0"&lt;br&gt;
    }&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;provider "aws" {&lt;br&gt;
  profile = "default"&lt;br&gt;
  region  = "us-east-1"&lt;br&gt;
}&lt;br&gt;
Step 3: Create the S3 Bucket&lt;br&gt;
Create a file named bucket.tf to define your S3 bucket and upload an index.html file to it.&lt;/p&gt;

&lt;h1&gt;
  
  
  bucket.tf
&lt;/h1&gt;

&lt;p&gt;resource "aws_s3_bucket" "terraform_demo_bucket" {&lt;br&gt;
  bucket = "terraform-demo-1808"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_s3_object" "index_file" {&lt;br&gt;
  bucket       = aws_s3_bucket.terraform_demo_bucket.id&lt;br&gt;
  key          = "index.html"&lt;br&gt;
  source       = "index.html"&lt;br&gt;
  content_type = "text/html"&lt;br&gt;
  etag         = filemd5("index.html")&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_s3_bucket_website_configuration" "website_config" {&lt;br&gt;
  bucket = aws_s3_bucket.terraform_demo_bucket.id&lt;/p&gt;

&lt;p&gt;index_document {&lt;br&gt;
    suffix = "index.html"&lt;br&gt;
  }&lt;br&gt;
}&lt;br&gt;
Step 4: Set Up Bucket Policies&lt;br&gt;
Create a file named policy.tf to define your S3 bucket policies to allow public access.&lt;/p&gt;

&lt;h1&gt;
  
  
  policy.tf
&lt;/h1&gt;

&lt;p&gt;resource "aws_s3_bucket_public_access_block" "public_access_block" {&lt;br&gt;
  bucket               = aws_s3_bucket.terraform_demo_bucket.id&lt;br&gt;
  block_public_acls    = false&lt;br&gt;
  block_public_policy  = false&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_s3_bucket_policy" "bucket_policy" {&lt;br&gt;
  bucket = aws_s3_bucket.terraform_demo_bucket.id&lt;/p&gt;

&lt;p&gt;policy = jsonencode({&lt;br&gt;
    Version = "2012-10-17"&lt;br&gt;
    Statement = [&lt;br&gt;
      {&lt;br&gt;
        Sid       = "PublicReadGetObject"&lt;br&gt;
        Effect    = "Allow"&lt;br&gt;
        Principal = "&lt;em&gt;"&lt;br&gt;
        Action    = ["s3:GetObject"]&lt;br&gt;
        Resource  = "${aws_s3_bucket.terraform_demo_bucket.arn}/&lt;/em&gt;"&lt;br&gt;
      },&lt;br&gt;
    ]&lt;br&gt;
  })&lt;br&gt;
  depends_on = [aws_s3_bucket_public_access_block.public_access_block]&lt;br&gt;
}&lt;br&gt;
Step 5: Configure the Output&lt;br&gt;
Create a file named output.tf to define the output variable for your website’s URL.&lt;/p&gt;

&lt;h1&gt;
  
  
  output.tf
&lt;/h1&gt;

&lt;p&gt;output "website_url" {&lt;br&gt;
  value = "http://${aws_s3_bucket.terraform_demo_bucket.bucket}.s3-website-${aws_s3_bucket.terraform_demo_bucket.region}.amazonaws.com"&lt;br&gt;
}&lt;br&gt;
Step 6: Initialize Terraform&lt;br&gt;
Initialize Terraform to prepare the working directory for managing infrastructure. This command downloads and installs the required provider plugins.&lt;/p&gt;

&lt;p&gt;terraform init&lt;/p&gt;

&lt;p&gt;Step 7: Validate the Configuration&lt;br&gt;
Validate the Terraform configuration files to ensure the syntax is correct.&lt;/p&gt;

&lt;p&gt;terraform validate&lt;/p&gt;

&lt;p&gt;Step 8: Plan the Deployment&lt;br&gt;
Generate and review the execution plan to understand the changes Terraform will apply.&lt;/p&gt;

&lt;p&gt;terraform plan&lt;/p&gt;

&lt;p&gt;Step 9: Apply the Configuration&lt;br&gt;
Apply the Terraform configuration to create the infrastructure.&lt;/p&gt;

&lt;p&gt;terraform apply&lt;/p&gt;

&lt;p&gt;Step 10: Access Your Website&lt;br&gt;
After the apply process completes, Terraform will output your website's URL. Visit the URL to see your static website live.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
Congratulations! You have successfully hosted a static website on Amazon S3 using Terraform. This approach ensures that your infrastructure is version-controlled and easily reproducible. By following this guide, you can quickly deploy static websites for various purposes, such as personal blogs, portfolios, or documentation sites. Explore the power of Infrastructure as Code with Terraform and elevate your web hosting experience!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>s3</category>
      <category>website</category>
    </item>
    <item>
      <title>My project-Sentiment analysis</title>
      <dc:creator>Vishal Raju</dc:creator>
      <pubDate>Wed, 24 Jul 2024 14:55:25 +0000</pubDate>
      <link>https://forem.com/vishal_raju_6a7ca9503a75b/my-project-sentiment-analysis-33fk</link>
      <guid>https://forem.com/vishal_raju_6a7ca9503a75b/my-project-sentiment-analysis-33fk</guid>
      <description>&lt;p&gt;IDEA &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7p96h8h6vjb6a9u0o87n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7p96h8h6vjb6a9u0o87n.png" alt="Image description" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TECH STACK/ARCHITECTURE&lt;br&gt;
DESCRIPTION&lt;br&gt;
• The workflow begins with the creation of text files containing the data to be analyzed.&lt;br&gt;
• The application reads the text files, processes them using Amazon Comprehend APIs, and generates insights such as language, sentiment, and key phrases.&lt;br&gt;
• The results of the analysis can be stored in a database or sent to other AWS services for further processing.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>My Project: Implementing Amazon Macie with EventBridge and SNS</title>
      <dc:creator>Vishal Raju</dc:creator>
      <pubDate>Wed, 24 Jul 2024 14:29:02 +0000</pubDate>
      <link>https://forem.com/vishal_raju_6a7ca9503a75b/my-project-implementing-amazon-macie-with-eventbridge-and-sns-3h7h</link>
      <guid>https://forem.com/vishal_raju_6a7ca9503a75b/my-project-implementing-amazon-macie-with-eventbridge-and-sns-3h7h</guid>
      <description>&lt;p&gt;Project Introduction&lt;/p&gt;

&lt;p&gt;In this project we implement the functionality of Amazon Macie. Amazon Macie is a machine-learning service that automatically evaluates data stored in S3, identifies sensitive data, and lets you take action on it.&lt;br&gt;
• We will create a discovery job to generate findings within Macie.&lt;br&gt;
• Findings will be identified using managed data identifiers and/or custom data identifiers.&lt;br&gt;
• An SNS topic is created with the configuration of valid publishers and subscribers.&lt;br&gt;
• A subscription with an email endpoint is created for the above SNS topic.&lt;br&gt;
• An event pattern rule is created in EventBridge with AWS Macie as the event source and the SNS topic as the target.&lt;br&gt;
Steps &amp;amp; Workflow&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Uploading sensitive data to s3&lt;/li&gt;
&lt;li&gt; Create a discovery job in Macie using Managed Data Identifier&lt;/li&gt;
&lt;li&gt; Create SNS Topic for notifications&lt;/li&gt;
&lt;li&gt; Create an EventBridge rule whenever Macie has any findings&lt;/li&gt;
&lt;li&gt; Create Custom Data Identifier
Demo&lt;/li&gt;
&lt;li&gt;Uploading sensitive data to s3
Let’s begin by creating a new bucket in s3 and upload the data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Upload the text files that contain sensitive data:&lt;/p&gt;

&lt;p&gt;accesscredentials.txt&lt;br&gt;
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE&lt;br&gt;
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&lt;br&gt;
AWS_SESSION_TOKEN=AQoDYXdzEPT//////////wEXAMPLEtc764bNrC9SAPBSM22wDOk4x4HIZ8j4FZTwdQWLWsKWHGBuFqwAeMicRXmxfpSPfIeoIYRqTflfKD8YUuwthAx7mSEI/qkPpKPi/kMcGdQrmGdeehM4IC1NtBmUpp2wUE8phUZampKsburEDy0KPkyQDYwT7WZ0wq5VSXDvp75YU9HFvlRd8Tx6q6fE8YQcHNVXAkiY9q6d+xo0rKwT38xVqr7ZD0u0iPPkUL64lIZbqBAz+scqKmlzm8FDrypNC9Yjc8fPOLn9FX9KSYvKTr4rvx3iSIlTJabIQwj2ICCR/oLxBA==&lt;br&gt;
github_key: c8a2f31d8daeb219f623f484f1d0fa73ae6b4b5a&lt;br&gt;
github_api_key: c8a2f31d8daeb219f623f484f1d0fa73ae6b4b5a&lt;br&gt;
github_secret: c8a2f31d8daeb219f623f484f1d0fa73ae6b4b5a&lt;br&gt;
creditcards.txt&lt;br&gt;
American Express&lt;br&gt;
5135725008183484 09/26&lt;br&gt;
CVE: 550&lt;/p&gt;

&lt;p&gt;American Express&lt;br&gt;
347965534580275 05/24&lt;br&gt;
CCV: 4758&lt;/p&gt;

&lt;p&gt;Mastercard&lt;br&gt;
5105105105105100&lt;br&gt;
Exp: 01/27&lt;br&gt;
Security code: 912&lt;br&gt;
customdata.txt (Australian license plates)&lt;/p&gt;

&lt;h1&gt;
  
  
  Victoria
&lt;/h1&gt;

&lt;p&gt;1BE8BE&lt;br&gt;
ABC123&lt;br&gt;
DEF-456&lt;/p&gt;

&lt;h1&gt;
  
  
  New South Wales
&lt;/h1&gt;

&lt;p&gt;AO31BE&lt;br&gt;
AO-15-EB&lt;br&gt;
BU-60-UB&lt;/p&gt;

&lt;h1&gt;
  
  
  Queensland
&lt;/h1&gt;

&lt;p&gt;123ABC&lt;br&gt;
000ZZZ&lt;br&gt;
987-YXW&lt;br&gt;
employeedata.txt&lt;br&gt;
74323 Julie Field&lt;br&gt;
Lake Joshuamouth, OR 30055-3905&lt;br&gt;
1-196-191-4438x974&lt;br&gt;
53001 Paul Union&lt;br&gt;
New John, HI 94740&lt;br&gt;
Amanda Wells&lt;/p&gt;

&lt;p&gt;354-70-6172&lt;br&gt;
242 George Plaza&lt;br&gt;
East Lawrencefurt, VA 37287-7620&lt;br&gt;
GB73WAUS0628038988364&lt;br&gt;
587 Silva Village&lt;br&gt;
Pearsonburgh, NM 11616-7231&lt;br&gt;
LDNM1948227117807&lt;br&gt;
Brett Garza&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a discovery job in Macie using Managed Data Identifier
We are now creating a job to analyze the data in the S3 bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Select the managed data identifier option ‘Recommended’, i.e. all managed data identifiers that AWS recommends.&lt;/p&gt;

&lt;p&gt;At this stage we are not adding any custom data identifiers; we will do that later in the project.&lt;/p&gt;

&lt;p&gt;The discovery job is now created&lt;/p&gt;

&lt;p&gt;Click on the job → Show results → Show findings.&lt;/p&gt;

&lt;p&gt;We will be able to see the types of sensitive data identified by Macie: in our case, personal, credentials, and financial.&lt;br&gt;
Notice that the license-plate information is not flagged; we need to create a custom data identifier for Macie to flag that content, which we will do in step 5. In the next step we set up a notification so that we are alerted whenever Macie identifies data as sensitive.&lt;/p&gt;
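&lt;p&gt;As an aside, the card numbers in creditcards.txt are detectable partly because they satisfy the Luhn checksum that payment card numbers must pass. A quick local sanity check (a sketch, not part of Macie itself):&lt;/p&gt;

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used
    by payment card numbers; non-digits (spaces, dashes) are ignored."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:           # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9           # equivalent to summing the two digits
        total += d
    return total % 10 == 0

# 5105105105105100 is the well-known Mastercard test number from the sample data
print(luhn_valid("5105105105105100"))
```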

&lt;ol&gt;
&lt;li&gt;Create SNS Topic for notifications
Navigate to the SNS console and create an SNS topic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, we need to create a subscription for this SNS topic.&lt;/p&gt;

&lt;p&gt;The subscription is now created:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an EventBridge rule whenever Macie has any findings&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Select the target for the event; in our case it is the SNS topic. Whenever Macie identifies any sensitive data, this EventBridge rule is triggered, the event is sent through SNS, and we are notified (since we subscribed to the SNS topic).&lt;/p&gt;

&lt;p&gt;The rule is now enabled&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create Custom Data Identifier
Now let's create a custom data identifier for the data below, which Macie did not flag earlier.
customdata.txt (Australian license plates)
# Victoria
1BE8BE
ABC123
DEF-456
# New South Wales
AO31BE
AO-15-EB
BU-60-UB
# Queensland
123ABC
000ZZZ
987-YXW
Navigate to Macie → Custom Data Identifier&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the regular expression field, enter the expression below; it identifies Australian license plates:&lt;br&gt;
([0-9][a-zA-Z][a-zA-Z]-?[0-9][a-zA-Z][a-zA-Z])|([a-zA-Z][a-zA-Z][a-zA-Z]-?[0-9][0-9][0-9])|([a-zA-Z][a-zA-Z]-?[0-9][0-9]-?[a-zA-Z][a-zA-Z])|([0-9][0-9][0-9]-?[a-zA-Z][a-zA-Z][a-zA-Z])|([0-9][0-9][0-9]-?[0-9][a-zA-Z][a-zA-Z])&lt;br&gt;
Now, let's configure a new job and select the custom data identifier we created so that Macie can identify Australian license plates.&lt;/p&gt;

&lt;p&gt;The job is now created.&lt;/p&gt;

&lt;p&gt;If we navigate to the findings, we can see that the license-plate data is now flagged by Macie.&lt;/p&gt;

&lt;p&gt;We also received an email notification via the EventBridge rule and SNS.&lt;/p&gt;

&lt;p&gt;Thanks for reading the article.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>"Connecting to an EC2 Instance Using SSH: Easy Steps to Access Your Instance"</title>
      <dc:creator>Vishal Raju</dc:creator>
      <pubDate>Thu, 04 Jul 2024 06:30:01 +0000</pubDate>
      <link>https://forem.com/vishal_raju_6a7ca9503a75b/connecting-to-an-ec2-instance-using-ssh-easy-steps-to-access-your-instance-dhk</link>
      <guid>https://forem.com/vishal_raju_6a7ca9503a75b/connecting-to-an-ec2-instance-using-ssh-easy-steps-to-access-your-instance-dhk</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ye72yltdujli3o2nbg3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ye72yltdujli3o2nbg3.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Gather EC2 Instance Details&lt;br&gt;
Public DNS (or Public IP): Identify the Public DNS (or IP address) of your EC2 instance. You can find this in the AWS Management Console under the Instances section.&lt;br&gt;
Key Pair File (.pem): Ensure you have the private key (.pem file) that was used when launching the instance. If you don’t have it, you might need to create a new key pair and associate it with your instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set Permissions for Your Key Pair File&lt;br&gt;
Ensure that the permissions on your key pair file are set correctly to maintain security. In a terminal or command prompt, use the following command:&lt;br&gt;
chmod 400 /path/to/your-key-pair.pem&lt;br&gt;
Replace /path/to/your-key-pair.pem with the actual path to your key pair file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open Your Terminal or Command Prompt&lt;br&gt;
Open a terminal window (Linux or macOS) or a command prompt (Windows).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect Using SSH&lt;br&gt;
Use the ssh command to connect to your EC2 instance. The syntax is:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;
ssh -i /path/to/your-key-pair.pem ec2-user@your-instance-public-dns&lt;br&gt;
Replace:&lt;br&gt;
/path/to/your-key-pair.pem: Path to your .pem file.&lt;br&gt;
ec2-user: Username for your instance (can vary by operating system; for example, ubuntu for Ubuntu instances).&lt;br&gt;
your-instance-public-dns: Public DNS (or IP address) of your EC2 instance.&lt;br&gt;
For example:&lt;/p&gt;

&lt;p&gt;
ssh -i ~/Downloads/your-key-pair.pem &lt;a href="mailto:ec2-user@ec2-11-22-33-44.compute-1.amazonaws.com"&gt;ec2-user@ec2-11-22-33-44.compute-1.amazonaws.com&lt;/a&gt;&lt;/p&gt;
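&lt;p&gt;If you script connections often, the command can be assembled programmatically. A purely illustrative Python helper (paths and hostnames are placeholders) whose output is suitable for subprocess.run:&lt;/p&gt;

```python
def build_ssh_command(key_path: str, user: str, host: str) -> list:
    """Assemble the ssh argument list for an EC2 instance.

    user depends on the AMI (ec2-user for Amazon Linux, ubuntu for
    Ubuntu); host is the instance's public DNS name or IP address.
    """
    return ["ssh", "-i", key_path, f"{user}@{host}"]

# Placeholder values matching the example above
cmd = build_ssh_command("~/Downloads/your-key-pair.pem", "ec2-user",
                        "ec2-11-22-33-44.compute-1.amazonaws.com")
print(" ".join(cmd))
```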

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Authenticate and Connect&lt;br&gt;
If it’s your first time connecting to this instance, you may see a message about the authenticity of the host. Type yes to continue connecting.&lt;br&gt;
You should now be connected to your EC2 instance via SSH.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Post-Connection Tasks&lt;br&gt;
Once connected, you can execute commands on your EC2 instance terminal just like you would on a local terminal.&lt;br&gt;
Troubleshooting Tips:&lt;br&gt;
Security Group Settings: Ensure that your EC2 instance’s security group allows SSH access (port 22) from your current IP address or IP range.&lt;br&gt;
Instance State: Verify that your EC2 instance is running and reachable over the network.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This detailed guide should help you connect to your EC2 instance securely using SSH.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju83tqld4otcxg7s9oo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju83tqld4otcxg7s9oo1.png" alt="Image description" width="779" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>learning</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>A DEEP DIVE INTO TERRAFORM</title>
      <dc:creator>Vishal Raju</dc:creator>
      <pubDate>Thu, 27 Jun 2024 17:13:03 +0000</pubDate>
      <link>https://forem.com/vishal_raju_6a7ca9503a75b/a-deep-dive-into-terraform-b79</link>
      <guid>https://forem.com/vishal_raju_6a7ca9503a75b/a-deep-dive-into-terraform-b79</guid>
      <description>&lt;p&gt;What is Infrastructure as Code with Terraform?&lt;br&gt;
Getting Started with Terraform on AWS&lt;br&gt;
Infrastructure as Code (IaC) lets you manage infrastructure with configuration files. Terraform, HashiCorp's IaC tool, offers several advantages:&lt;br&gt;
• Multi-Cloud Management: Manage resources across AWS, Azure, GCP, etc.&lt;br&gt;
• Declarative Language: Write and maintain infrastructure code easily.&lt;br&gt;
• State Management: Track resource changes with Terraform's state file.&lt;br&gt;
• Version Control: Safely collaborate using version control systems.&lt;/p&gt;

&lt;p&gt;Terraform Workflow&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Scope: Identify infrastructure needs.&lt;/li&gt;
&lt;li&gt; Author: Write configuration files.&lt;/li&gt;
&lt;li&gt; Initialize: Install necessary plugins.&lt;/li&gt;
&lt;li&gt; Plan: Preview changes.&lt;/li&gt;
&lt;li&gt; Apply: Implement the changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Collaboration and Tracking&lt;br&gt;
• State File: Acts as the source of truth for your infrastructure.&lt;br&gt;
• HCP Terraform: Share state securely, prevent race conditions, and integrate with VCS like GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F393l1da3xh77pig2wt6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F393l1da3xh77pig2wt6c.png" alt="Image description" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1) Install Terraform&lt;br&gt;
Creating Your First AWS EC2 Instance with Terraform&lt;br&gt;
To get started with Terraform and AWS, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Prerequisites:
o   Install Terraform CLI (1.2.0+) and AWS CLI.
o   Have an AWS account with credentials ready.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set AWS Credentials:&lt;br&gt;
$ export AWS_ACCESS_KEY_ID=&lt;br&gt;
$ export AWS_SECRET_ACCESS_KEY=&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write Configuration:&lt;br&gt;
o   Create a directory and main.tf file:&lt;br&gt;
$ mkdir learn-terraform-aws-instance&lt;br&gt;
$ cd learn-terraform-aws-instance&lt;br&gt;
$ touch main.tf&lt;br&gt;
o   Paste the configuration into main.tf:&lt;br&gt;
terraform {&lt;br&gt;
required_providers {&lt;br&gt;
aws = {&lt;br&gt;
  source  = "hashicorp/aws"&lt;br&gt;
  version = "~&amp;gt; 4.16"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;required_version = "&amp;gt;= 1.2.0"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;provider "aws" {&lt;br&gt;
  region = "us-west-2"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_instance" "app_server" {&lt;br&gt;
  ami           = "ami-830c94e3"&lt;br&gt;
  instance_type = "t2.micro"&lt;/p&gt;

&lt;p&gt;tags = {&lt;br&gt;
    Name = "ExampleAppServerInstance"&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Initialize and Apply Configuration:
$ terraform init
$ terraform apply&lt;/li&gt;
&lt;li&gt; Inspect State:
$ terraform show
That's it! You've now created your first AWS EC2 instance using Terraform. Explore further by modifying configurations and diving deeper into Terraform's capabilities. Happy provisioning!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;2) Change infrastructure&lt;br&gt;
Prerequisites&lt;br&gt;
Ensure you have:&lt;br&gt;
• Terraform CLI (1.2.0+) installed.&lt;br&gt;
• AWS CLI configured with a default profile.&lt;br&gt;
Setting Up Your Project&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create Directory and Configuration File:
Start by creating a directory and main.tf file:
$ mkdir learn-terraform-aws-instance
$ cd learn-terraform-aws-instance
$ touch main.tf&lt;/li&gt;
&lt;li&gt; Configure main.tf:
Add AWS instance configuration to main.tf:
terraform {
required_providers {
aws = {
  source  = "hashicorp/aws"
  version = "~&amp;gt; 4.16"
}
}&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;required_version = "&amp;gt;= 1.2.0"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;provider "aws" {&lt;br&gt;
  region  = "us-west-2"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_instance" "app_server" {&lt;br&gt;
  ami           = "ami-830c94e3"&lt;br&gt;
  instance_type = "t2.micro"&lt;/p&gt;

&lt;p&gt;tags = {&lt;br&gt;
    Name = "ExampleAppServerInstance"&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Initialize and Apply Configuration:
Initialize and apply your configuration:
$ terraform init
$ terraform apply&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;3) Updating Infrastructure&lt;br&gt;
To update instance configuration (e.g., change AMI):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Modify main.tf:
Update ami under aws_instance.app_server:
resource "aws_instance" "app_server" {
-  ami           = "ami-830c94e3"
+  ami           = "ami-08d70e59c07c61a3a" // New AMI ID
instance_type = "t2.micro"
}&lt;/li&gt;
&lt;li&gt; Apply Changes:
Apply changes to update the instance:
$ terraform apply&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Execution Plan&lt;br&gt;
• Terraform's plan (terraform apply) shows actions such as creating new resources or updating existing ones.&lt;br&gt;
• Changing the AMI forces recreation (-/+ destroy and then create replacement) because the AMI of a running instance cannot be changed in place.&lt;br&gt;
Conclusion&lt;br&gt;
Terraform simplifies AWS resource management with automation and consistency.&lt;/p&gt;

&lt;p&gt;4) Destroy infrastructure&lt;br&gt;
Managing Infrastructure Lifecycle with Terraform&lt;br&gt;
In this tutorial, you've learned how to create and update an EC2 instance on AWS using Terraform. Now, let's explore how to destroy resources when they are no longer needed.&lt;br&gt;
Why Destroy?&lt;br&gt;
• Cost Reduction: Stop paying for unused resources.&lt;br&gt;
• Security: Minimize exposure by removing unnecessary components.&lt;br&gt;
Destroying Resources&lt;br&gt;
To destroy managed resources:&lt;br&gt;
$ terraform destroy&lt;br&gt;
Execution Plan&lt;br&gt;
Terraform outlines what will be destroyed:&lt;/p&gt;

&lt;p&gt;Terraform will perform the following actions:&lt;/p&gt;

&lt;p&gt;# aws_instance.app_server will be destroyed&lt;br&gt;
- resource "aws_instance" "app_server" {&lt;br&gt;
    - ami = "ami-08d70e59c07c61a3a" -&amp;gt; null&lt;br&gt;
    - arn = "arn:aws:ec2:us-west-2:561656980159:instance/i-0fd4a35969bd21710" -&amp;gt; null&lt;br&gt;
    # ...&lt;br&gt;
  }&lt;/p&gt;

&lt;p&gt;Plan: 0 to add, 0 to change, 1 to destroy.&lt;br&gt;
Confirm and Execute&lt;br&gt;
Terraform requires confirmation before proceeding:&lt;br&gt;
Do you really want to destroy all resources?&lt;br&gt;
  Terraform will destroy all your managed infrastructure, as shown above.&lt;br&gt;
  There is no undo. Only 'yes' will be accepted to confirm.&lt;/p&gt;

&lt;p&gt;Enter a value:&lt;br&gt;
Finalization&lt;br&gt;
Once confirmed, Terraform begins destroying the resources:&lt;br&gt;
aws_instance.app_server: Destroying... [id=i-0fd4a35969bd21710]&lt;br&gt;
aws_instance.app_server: Destruction complete after 31s&lt;/p&gt;

&lt;p&gt;Destroy complete! Resources: 1 destroyed.&lt;br&gt;
Conclusion&lt;br&gt;
By following these steps, you've seen how Terraform efficiently manages the lifecycle of your cloud infrastructure, ensuring cost-effectiveness and security. &lt;/p&gt;

&lt;p&gt;5) Define input variables&lt;br&gt;
Streamlining Infrastructure Management with Terraform Variables&lt;br&gt;
In this tutorial, you'll optimize your Terraform setup by introducing variables for more flexible infrastructure configuration.&lt;br&gt;
Prerequisites&lt;br&gt;
Ensure:&lt;br&gt;
• Terraform CLI (1.2.0+) is installed.&lt;br&gt;
• AWS CLI is configured with a default profile.&lt;br&gt;
• Directory learn-terraform-aws-instance exists with main.tf configured as specified.&lt;br&gt;
Configuring Variables&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create variables.tf: Define an instance_name variable to customize the EC2 instance's Name tag:
variable "instance_name" {
description = "Name tag for the EC2 instance"
type        = string
default     = "ExampleAppServerInstance"
}&lt;/li&gt;
&lt;li&gt; Update main.tf: Modify the aws_instance resource to utilize the instance_name variable:
resource "aws_instance" "app_server" {
ami           = "ami-08d70e59c07c61a3a"
instance_type = "t2.micro"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;tags = {&lt;br&gt;
    Name = var.instance_name&lt;br&gt;
  }&lt;br&gt;
}&lt;br&gt;
Applying Configuration&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Initialize and Apply: Initialize Terraform and apply the configuration:
$ terraform init
$ terraform apply&lt;/li&gt;
&lt;li&gt; Customize Instance Name: Override the default instance name using the -var flag during apply:
$ terraform apply -var "instance_name=YetAnotherName"
Verification
• Terraform presents an execution plan before applying changes for clarity and safety.
• Confirm changes when prompted, observing Terraform's efficient handling of resource updates.
Conclusion
By using Terraform variables, you've enhanced your infrastructure's adaptability and reduced configuration repetition. &lt;/li&gt;
&lt;/ol&gt;
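&lt;p&gt;Variables can additionally enforce constraints with validation blocks (Terraform 0.13+). The rule below is a hypothetical addition to the variables.tf shown above, not part of the original tutorial:&lt;/p&gt;

```hcl
variable "instance_name" {
  description = "Name tag for the EC2 instance"
  type        = string
  default     = "ExampleAppServerInstance"

  validation {
    # Reject empty names before any plan is produced
    condition     = var.instance_name != ""
    error_message = "instance_name must not be empty."
  }
}
```

&lt;p&gt;With this in place, terraform plan fails fast with the given error message instead of tagging an instance with an empty name.&lt;/p&gt;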

&lt;p&gt;6) Query data with outputs&lt;br&gt;
Streamlining Terraform with Output Values&lt;br&gt;
In this guide, we'll maximize Terraform's capabilities by utilizing output values to extract essential information about our AWS infrastructure.&lt;br&gt;
Prerequisites&lt;br&gt;
Ensure:&lt;br&gt;
• Terraform CLI (1.2.0+) is installed.&lt;br&gt;
• AWS CLI is configured with a default profile.&lt;br&gt;
• You have a directory named learn-terraform-aws-instance with configured main.tf and variables.tf.&lt;br&gt;
Initial Setup&lt;br&gt;
Recap your current configuration in main.tf and variables.tf:&lt;/p&gt;

&lt;h1&gt;
  
  
  main.tf
&lt;/h1&gt;

&lt;p&gt;terraform {&lt;br&gt;
  required_providers {&lt;br&gt;
    aws = {&lt;br&gt;
      source  = "hashicorp/aws"&lt;br&gt;
      version = "~&amp;gt; 4.16"&lt;br&gt;
    }&lt;br&gt;
  }&lt;/p&gt;

&lt;p&gt;required_version = "&amp;gt;= 1.2.0"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;provider "aws" {&lt;br&gt;
  region  = "us-west-2"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_instance" "app_server" {&lt;br&gt;
  ami           = "ami-08d70e59c07c61a3a"&lt;br&gt;
  instance_type = "t2.micro"&lt;/p&gt;

&lt;p&gt;tags = {&lt;br&gt;
    Name = var.instance_name&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;h1&gt;
  
  
  variables.tf
&lt;/h1&gt;

&lt;p&gt;variable "instance_name" {&lt;br&gt;
  description = "Name tag for the EC2 instance"&lt;br&gt;
  type        = string&lt;br&gt;
  default     = "ExampleAppServerInstance"&lt;br&gt;
}&lt;br&gt;
Defining Outputs&lt;br&gt;
Create outputs.tf to specify outputs for the instance's ID and public IP:&lt;/p&gt;

&lt;h1&gt;
  
  
  outputs.tf
&lt;/h1&gt;

&lt;p&gt;output "instance_id" {&lt;br&gt;
  description = "ID of the EC2 instance"&lt;br&gt;
  value       = aws_instance.app_server.id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "instance_public_ip" {&lt;br&gt;
  description = "Public IP address of the EC2 instance"&lt;br&gt;
  value       = aws_instance.app_server.public_ip&lt;br&gt;
}&lt;br&gt;
Applying Configuration&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Initialize and Apply Configuration:
$ terraform init
$ terraform apply&lt;/li&gt;
&lt;li&gt; Inspect Output Values: Upon applying, Terraform displays outputs such as instance_id and instance_public_ip, crucial for managing and automating your infrastructure.
Outputs:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;instance_id = "i-0bf954919ed765de1"&lt;br&gt;
instance_public_ip = "54.186.202.254"&lt;br&gt;
Conclusion&lt;br&gt;
Utilizing Terraform outputs enhances operational visibility and automation by providing essential resource details. These outputs are seamlessly integrable with other infrastructure components or subsequent Terraform projects.&lt;br&gt;
Cleanup (Optional)&lt;br&gt;
If not continuing to further tutorials, clean up your infrastructure:&lt;br&gt;
$ terraform destroy&lt;br&gt;
Confirm destruction to optimize cost and security by removing unused resources.&lt;/p&gt;
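&lt;p&gt;Outputs can also be marked sensitive so their values are redacted in the CLI summary. The private-IP output below is an assumed addition for illustration, not part of the tutorial's outputs.tf:&lt;/p&gt;

```hcl
output "instance_private_ip" {
  description = "Private IP address of the EC2 instance"
  value       = aws_instance.app_server.private_ip
  # Redacted in the apply summary; still retrievable with `terraform output instance_private_ip`
  sensitive   = true
}
```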

&lt;p&gt;7) Store remote state&lt;br&gt;
Getting Started with Terraform and HCP Terraform&lt;br&gt;
Overview&lt;br&gt;
Terraform simplifies infrastructure management by treating it as code. This guide helps you set up Terraform to provision AWS resources and integrate with HashiCorp Cloud Platform (HCP) Terraform for centralized state management.&lt;br&gt;
Prerequisites&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Configuration Setup: Create a directory named learn-terraform-aws-instance and save the following in main.tf:
terraform {
required_providers {
aws = {
  source  = "hashicorp/aws"
  version = "~&amp;gt; 4.16"
}
}
required_version = "&amp;gt;= 1.2.0"
}&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;provider "aws" {&lt;br&gt;
  region  = "us-west-2"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_instance" "app_server" {&lt;br&gt;
  ami           = "ami-08d70e59c07c61a3a"&lt;br&gt;
  instance_type = "t2.micro"&lt;br&gt;
}&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Initialize and Apply: Initialize Terraform and apply your configuration:
$ terraform init
$ terraform apply
Setting up HCP Terraform&lt;/li&gt;
&lt;li&gt; Log in to HCP Terraform: Use the Terraform CLI to log in and authenticate with HCP Terraform:
$ terraform login&lt;/li&gt;
&lt;li&gt; Configure for HCP Terraform: Modify main.tf to integrate with HCP Terraform:
terraform {
cloud {
organization = "organization-name"
workspaces {
  name = "learn-terraform-aws"
}
}&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;required_providers {&lt;br&gt;
    aws = {&lt;br&gt;
      source  = "hashicorp/aws"&lt;br&gt;
      version = "~&amp;gt; 4.16"&lt;br&gt;
    }&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Initialize and Migrate State: Re-initialize Terraform to migrate state to HCP Terraform:
$ terraform init
Confirm migration and delete the local state file.&lt;/li&gt;
&lt;/ol&gt;
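&lt;p&gt;HCP Terraform is one of several remote state options. As a point of comparison only (not part of this walkthrough), an S3 backend also stores state remotely; the bucket name here is hypothetical and the bucket must exist before terraform init:&lt;/p&gt;

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # hypothetical, pre-existing bucket
    key    = "learn-terraform-aws-instance/terraform.tfstate"
    region = "us-west-2"
  }
}
```

&lt;p&gt;A configuration uses either a cloud block or a backend block, never both at once.&lt;/p&gt;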

&lt;p&gt;Applying Configuration and Managing Workspace&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Set Workspace Variables: Configure AWS credentials in HCP Terraform's workspace variables.&lt;/li&gt;
&lt;li&gt; Apply Configuration: Apply your Terraform configuration to ensure infrastructure consistency:
$ terraform apply
Destroying Infrastructure
Clean up resources using:
$ terraform destroy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Conclusion&lt;br&gt;
You've completed the essentials of Terraform and HCP Terraform integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71w67xibdbyh3b24uvdf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71w67xibdbyh3b24uvdf.png" alt="Image description" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Frequently Asked Questions&lt;br&gt;
1. Do I need prior programming or infrastructure experience to follow the guide?&lt;br&gt;
No, prior programming or infrastructure experience is not necessary to follow the guide. It is designed to cater to beginners and assumes no prior knowledge of Terraform. The guide provides step-by-step explanations and examples to help newcomers understand and apply the concepts effectively.&lt;/p&gt;

&lt;p&gt;2. Are there any prerequisites for using Terraform?&lt;br&gt;
Yes, a few: a basic understanding of cloud computing concepts, an account with a cloud provider (if you plan to provision resources in the cloud), Terraform installed, and a text editor suitable for writing code.&lt;/p&gt;

&lt;p&gt;3. Does the guide provide hands-on examples and exercises?&lt;br&gt;
Yes, the Terraform Beginner's Guide typically includes hands-on examples and exercises throughout the content. These examples help solidify the concepts and allow readers to practice writing Terraform configurations, executing commands, and managing infrastructure resources.&lt;/p&gt;

&lt;p&gt;4. How does Infrastructure as Code handle infrastructure updates and changes?&lt;br&gt;
Infrastructure as Code tools typically handle updates and changes by comparing the desired state defined in the code with the current state of the infrastructure. When changes are made to the code, the tools generate an execution plan that outlines the modifications required to achieve the desired state. This plan can be reviewed and then applied to update or modify the infrastructure accordingly.&lt;/p&gt;

&lt;p&gt;5. Can I use Infrastructure as Code for existing infrastructure?&lt;br&gt;
Yes, Infrastructure as Code can be used for existing infrastructure. By defining the existing infrastructure in code, you can capture its current state and make modifications to it using code-based configuration files. This approach allows you to manage existing infrastructure in a consistent and automated manner.&lt;/p&gt;
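&lt;p&gt;To the last point: since Terraform 1.5, an existing resource can be adopted declaratively with an import block alongside a matching resource definition. The instance ID below is a placeholder, not a real resource:&lt;/p&gt;

```hcl
# Adopt an already-running EC2 instance into Terraform state
import {
  to = aws_instance.app_server
  id = "i-0123456789abcdef0" # placeholder ID of an existing instance
}

resource "aws_instance" "app_server" {
  ami           = "ami-08d70e59c07c61a3a"
  instance_type = "t2.micro"
}
```

&lt;p&gt;The next terraform plan then shows the import alongside any changes needed to reconcile the configuration with the real resource.&lt;/p&gt;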

</description>
      <category>tutorial</category>
      <category>beginners</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
