<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Fife Oluwabunmi</title>
    <description>The latest articles on Forem by Fife Oluwabunmi (@thecolossus).</description>
    <link>https://forem.com/thecolossus</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1821446%2Fb8512496-2b9a-410a-9dea-4603effffb58.jpg</url>
      <title>Forem: Fife Oluwabunmi</title>
      <link>https://forem.com/thecolossus</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/thecolossus"/>
    <language>en</language>
    <item>
      <title>Deploying an AI Application on an EC2 Instance</title>
      <dc:creator>Fife Oluwabunmi</dc:creator>
      <pubDate>Fri, 28 Feb 2025 14:38:52 +0000</pubDate>
      <link>https://forem.com/thecolossus/deploying-ai-application-on-ec2-instance-4g68</link>
      <guid>https://forem.com/thecolossus/deploying-ai-application-on-ec2-instance-4g68</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The goal of this exercise was to deploy a Flask AI application on an EC2 instance. This documentation covers a step-by-step guide to achieving that in two ways: with Docker and without Docker.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying without Docker
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Testing the application locally
&lt;/h4&gt;

&lt;p&gt;As a rule of thumb, before deploying an application to a cloud environment, always run it locally first to confirm it works and catch bugs early.&lt;/p&gt;
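&lt;p&gt;A quick local check might look like this- the entry point &lt;code&gt;app.py&lt;/code&gt; and a &lt;code&gt;requirements.txt&lt;/code&gt; file are assumptions based on a typical Flask project, so adjust the names to match your code base:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies and start the app
pip install -r requirements.txt
python3 app.py

# Then open 127.0.0.1:5000 in the browser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;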

&lt;h4&gt;
  
  
  Setting up the cloud environment
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a GitHub repository to store the code if this hasn't been done already&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the AWS console and spin up an EC2 instance with reasonable processing power- a t2.medium should suffice&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure that when setting up the Security Group, an ingress rule for port 5000 is added.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect to the instance using your preferred method. Inside the instance, run &lt;code&gt;git --version&lt;/code&gt; to confirm the Git command-line tool is installed. If not, run the appropriate command below to install it&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# For Red-Hat based systems
sudo dnf install git-all

# For Debian based systems
sudo apt install git-all

# For macOS, install using Homebrew preferably
brew install git

# Verify installation
git --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Clone repository and install dependencies
&lt;/h4&gt;

&lt;p&gt;With the Git CLI installed, the repo can now be cloned&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone &amp;lt;github-repo-url&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Python &amp;amp; pip&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Ensure Python is installed
python3 --version

# If not installed, run commands to install

# Debian based
sudo apt update
sudo apt upgrade
sudo apt install python3
sudo apt install python3-pip
sudo apt install python3-dev python3-venv build-essential

# Red-hat based
sudo dnf update
sudo dnf install python3
sudo dnf groupinstall "Development Tools"
sudo dnf install python3-pip

# Confirm installation
python3 --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the project dependencies&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Navigate to the Project directory
cd &amp;lt;project-name&amp;gt;

# Ensure the directory has a requirements.txt file
ls

# Create Python virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run the application
python3 app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, open a browser and enter the instance's public IP with port 5000 attached, e.g. &lt;code&gt;22.222.22.111:5000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Your application is now running on EC2.&lt;/p&gt;
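&lt;p&gt;Note that &lt;code&gt;python3 app.py&lt;/code&gt; runs in the foreground, so the app stops when the SSH session ends. A simple way to keep it alive is shown below- for production, a WSGI server such as gunicorn behind a process manager would be the better option:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run in the background and survive logout
nohup python3 app.py &amp;gt; app.log 2&amp;gt;&amp;amp;1 &amp;amp;

# Confirm it is still serving
curl 127.0.0.1:5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;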

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonutme8ubhoczpciaqa1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonutme8ubhoczpciaqa1.png" alt="Application running on the browser" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying with Docker
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Have &lt;a href="https://docs.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; installed locally&lt;/li&gt;
&lt;li&gt;Have a Docker Hub &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;account&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the code base, a &lt;code&gt;Dockerfile&lt;/code&gt; has been created. Together with the &lt;code&gt;compose.yaml&lt;/code&gt; file, it is used to build the Docker image.&lt;/p&gt;
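&lt;p&gt;For reference, a minimal version of these two files might look like the following sketch- the entry point &lt;code&gt;app.py&lt;/code&gt;, the Python version, and the port are assumptions, so adjust them to match your project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python3", "app.py"]

# compose.yaml
services:
  flask-app:
    build: .
    ports:
      - "5000:5000"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;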

&lt;p&gt;To build the Docker image locally and start the container, run:&lt;br&gt;
&lt;code&gt;docker compose up --build&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command builds the image using &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verify the container is running successfully by attempting to access the application on &lt;code&gt;127.0.0.1:5000&lt;/code&gt;. Once confirmed, stop the container.&lt;/p&gt;

&lt;p&gt;Next, we need to tag the image so it can be pushed to Docker Hub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# First verify the image name
docker images

# Your output should be something like spamemail-flask-app

# Now tag the image
docker tag spamemail-flask-app &amp;lt;your-dockerhub-username&amp;gt;/spam-email:latest

# E.g.
docker tag spamemail-flask-app fifss/spam-email:latest

# Log in to Docker Hub from your terminal and follow the instructions
docker login

# Finally, push your image to Docker Hub
docker push &amp;lt;dockerhub-username&amp;gt;/spam-email:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Pull Docker image on ec2 instance
&lt;/h4&gt;

&lt;p&gt;Create an instance on AWS. Make sure to use at least a t2.medium because of Docker's system requirements.&lt;/p&gt;

&lt;p&gt;Also be sure to allow inbound traffic on port 5000 in the instance's Security Group.&lt;/p&gt;

&lt;p&gt;Log in to the instance and install Docker. Refer to the Docker &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After the installation, to run Docker without root privileges, refer to this &lt;a href="https://docs.docker.com/engine/install/linux-postinstall/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now test your Docker installation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that Docker is installed, pull the image from Docker Hub&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull &amp;lt;docker-username&amp;gt;/spam-email:latest

# Now that the image has been pulled, run the image as a container

docker run --name spam-email -p 5000:5000 &amp;lt;docker-username&amp;gt;/spam-email:latest

# E.g.
docker run --name spam-email -p 5000:5000 fifss/spam-email:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the browser and paste in the public IP address with the port &lt;code&gt;5000&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example
http://10.222.133.155:5000/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations! Your container is now running successfully.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How I Helped a Startup Automate Cloud Infrastructure in Minutes</title>
      <dc:creator>Fife Oluwabunmi</dc:creator>
      <pubDate>Thu, 20 Feb 2025 19:59:22 +0000</pubDate>
      <link>https://forem.com/thecolossus/how-i-helped-a-startup-automate-cloud-infrastructure-in-minutes-430p</link>
      <guid>https://forem.com/thecolossus/how-i-helped-a-startup-automate-cloud-infrastructure-in-minutes-430p</guid>
      <description>&lt;p&gt;A while ago, I was approached by a Co-Founder to help them push out a new feature and for this feature to work as expected, they needed cloud resources to be created on the fly- easily, quickly without any additional configuration and they needed it as part of a workflow. &lt;/p&gt;

&lt;p&gt;Now, from that brief description, the obvious technology that makes this possible is Infrastructure as Code, but figuring that out is the easy part. Ensuring that the resources are created seamlessly every time the Terraform scripts run was where the real work was!&lt;/p&gt;

&lt;p&gt;I'll be walking you through how I was able to achieve this for AWS &amp;amp; GCP ;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing AWS Cloud with Terraform
&lt;/h2&gt;

&lt;p&gt;The job was "simple": write a Terraform script to automate the creation of AWS EC2 instance(s) with all the supporting resources, ensuring it's &lt;strong&gt;secure&lt;/strong&gt; and &lt;strong&gt;accessible&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = var.region
}

# Create the VPC
resource "aws_vpc" "company_vpc" {
  cidr_block           = var.vpc_cidr_block
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "company-vpc"
  }
}

# Create the Subnet
resource "aws_subnet" "company_subnet" {
  vpc_id                  = aws_vpc.company_vpc.id
  cidr_block              = var.subnet_cidr_block
  map_public_ip_on_launch = true
  availability_zone       = var.availability_zone
  tags = {
    Name = "company-subnet"
  }
}

# Create the Internet Gateway
resource "aws_internet_gateway" "company_igw" {
  vpc_id = aws_vpc.company_vpc.id
  tags = {
    Name = "company-igw"
  }
}


# Create the Route Table
resource "aws_route_table" "company_route_table" {
  vpc_id = aws_vpc.company_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.company_igw.id
  }

  tags = {
    Name = "company-route-table"
  }
}

# Associate Route Table with Subnet
resource "aws_route_table_association" "company_subnet_assoc" {
  subnet_id      = aws_subnet.company_subnet.id
  route_table_id = aws_route_table.company_route_table.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're not concerned with setting up a VPC for the infrastructure, then ignore the jargon above XD&lt;/p&gt;

&lt;p&gt;Now for the interesting part: we will create the EC2 instance, Security Group, EBS volumes, and key pair.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create Security Group
resource "aws_security_group" "company_sg" {
  vpc_id = aws_vpc.company_vpc.id
  tags = {
    Name = "company-sg"
  }
}

# Ingress Rules
resource "aws_vpc_security_group_ingress_rule" "ssh_ingress" {
  security_group_id = aws_security_group.company_sg.id
  from_port         = 22
  to_port           = 22
  ip_protocol       = "tcp"
  cidr_ipv4         = "0.0.0.0/0"
}

# If you have specifics for the in-bound rules, then specify them.
# Avoid this!
resource "aws_vpc_security_group_ingress_rule" "all_tcp_ingress" {
  security_group_id = aws_security_group.company_sg.id
  from_port         = 0
  to_port           = 65535
  ip_protocol       = "tcp"
  cidr_ipv4         = "0.0.0.0/0"
}

# Egress Rules
resource "aws_vpc_security_group_egress_rule" "all_egress" {
  security_group_id = aws_security_group.company_sg.id
  # With ip_protocol "-1" (all traffic), port ranges must not be specified
  ip_protocol       = "-1"
  cidr_ipv4         = "0.0.0.0/0"
}

# Create EC2 Instances
resource "aws_instance" "company_instance" {
  count = var.instance_count

  # AMI &amp;amp; instance type come from variables.tf- change the defaults there
  ami           = var.ami
  instance_type = var.instance_type

  subnet_id                   = aws_subnet.company_subnet.id
  vpc_security_group_ids      = [aws_security_group.company_sg.id]
  associate_public_ip_address = true
  key_name = aws_key_pair.company-key.key_name

  # Root Volume (Default Storage)
  root_block_device {
    volume_size = 50  # Size in GB
    volume_type = "gp3"
  }

  # Additional EBS Volume
  ebs_block_device {
    device_name           = "/dev/xvdb"
    volume_size           = 100  # Size in GB
    volume_type           = "gp3"
    delete_on_termination = true
  }

  tags = {
    Name = "company-instance-${count.index + 1}"
  }
}

resource "tls_private_key" "pk" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "company-key" {
  key_name   = "company-aws-key-pair"
  public_key = tls_private_key.pk.public_key_openssh
}

resource "local_file" "company_key" {
  content         = tls_private_key.pk.private_key_pem
  filename        = "./company-aws-key-pair.pem"
  file_permission = "0400"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  outputs.tf
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  description = "ID of the VPC"
  value       = aws_vpc.company_vpc.id
}

output "subnet_id" {
  description = "ID of the subnet"
  value       = aws_subnet.company_subnet.id
}

output "security_group_id" {
  description = "ID of the security group"
  value       = aws_security_group.company_sg.id
}

output "instance_public_ips" {
  description = "Public IPs of the EC2 instances"
  value       = aws_instance.company_instance[*].public_ip
}

output "private_key_path" {
  description = "Path to the generated private key file"
  value       = local_file.company_key.filename
}

output "key_pair_name" {
  description = "Name of the AWS key pair"
  value       = aws_key_pair.company-key.key_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  variables.tf
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instance_count" {
  description = "Number of AWS instances to launch"
  type        = number
  default     = 1
}

variable "region" {
  description = "AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}

variable "vpc_cidr_block" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "subnet_cidr_block" {
  description = "CIDR block for the public subnet"
  type        = string
  default     = "10.0.1.0/24"
}

variable "availability_zone" {
  description = "Availability zone for the subnet"
  type        = string
  default     = "us-east-1a"
}

variable "ami" {
  description = "AMI ID for the EC2 instances"
  type        = string
  default     = "ami-0e2c8caa4b6378d8c"
}

variable "instance_type" {
  description = "Instance type for EC2 instances"
  type        = string
  default     = "t2.large"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
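&lt;p&gt;With the main configuration, &lt;code&gt;outputs.tf&lt;/code&gt;, and &lt;code&gt;variables.tf&lt;/code&gt; in place, provisioning follows the usual Terraform cycle:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Download the providers and initialize the working directory
terraform init

# Check the scripts for errors
terraform validate

# Preview the resources that will be created
terraform plan

# Create everything- here with 3 instances instead of the default 1
terraform apply -var="instance_count=3"

# Tear it all down when no longer needed
terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;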



&lt;p&gt;A few things to take away from this: we have a secure EC2 instance set up with supporting resources (a VPC, security group, and subnet) created, and we have a variables file to handle resource sizing, naming, and a couple of other things dynamically.&lt;/p&gt;
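&lt;p&gt;Because the private key is written to &lt;code&gt;./company-aws-key-pair.pem&lt;/code&gt;, you can SSH straight into the new instance(s) once the apply completes. The login user is an assumption- it depends on the AMI (&lt;code&gt;ec2-user&lt;/code&gt; on Amazon Linux, &lt;code&gt;ubuntu&lt;/code&gt; on Ubuntu):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Grab a public IP from the Terraform outputs
terraform output instance_public_ips

# Connect using the generated key
ssh -i company-aws-key-pair.pem ec2-user@&amp;lt;instance-public-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;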

&lt;p&gt;Thanks for reading ;D&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Scraping Custom Django Metrics with Prometheus</title>
      <dc:creator>Fife Oluwabunmi</dc:creator>
      <pubDate>Thu, 09 Jan 2025 11:08:28 +0000</pubDate>
      <link>https://forem.com/thecolossus/scraping-custom-django-metrics-with-prometheus-3mep</link>
      <guid>https://forem.com/thecolossus/scraping-custom-django-metrics-with-prometheus-3mep</guid>
      <description>&lt;p&gt;In the previous article, we explored setting up a Prometheus instance to scrape generic data(metrics) from our &lt;strong&gt;very&lt;/strong&gt; basic Django application. &lt;/p&gt;

&lt;p&gt;Now, we're taking it a step higher: We're going to send custom metrics to Prometheus so we can visualize the data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To follow along with this article, you need:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To have Prometheus installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To understand the basics of Prometheus. For that, refer to the previous entries in this &lt;a href="https://dev.to/thecolossus/series/28564"&gt;series&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A basic Django application set up for you to work with. You can refer to Django's official &lt;a href="https://docs.djangoproject.com/en/5.1/intro/tutorial01/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; for this.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using the Python Logging module
&lt;/h2&gt;

&lt;p&gt;Python already has a module for &lt;a href="https://docs.python.org/3/library/logging.html#module-logging" rel="noopener noreferrer"&gt;logging&lt;/a&gt; various information in an application. We will build on this module to generate our custom logs, which will be exposed to Prometheus for scraping.&lt;/p&gt;

&lt;p&gt;In your &lt;code&gt;settings.py&lt;/code&gt; add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os # Add this at the top of the file
.
.
.

# Add this at the bottom of the file
LOGGING = {
    'version': 1,  # Specifies the logging configuration schema version
    'disable_existing_loggers': False,  # Keep default Django loggers active
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {message}',
            'style': '{',  # Use `{}` to format log messages
        },
    },
    'handlers': {
        'file': {  # Write logs to a file
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': os.path.join(BASE_DIR, 'app.log'),  # Log file path
            'formatter': 'verbose',  # Use the verbose format
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'INFO',  # Log everything INFO and above
            'propagate': True,  # Pass log messages to parent loggers
        },
    },
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This defines a log handler that will write information into an &lt;code&gt;app.log&lt;/code&gt; file.&lt;/p&gt;
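&lt;p&gt;If you want to see what this configuration does in isolation, here is a stdlib-only sketch that runs the same kind of &lt;code&gt;LOGGING&lt;/code&gt; dict outside Django- &lt;code&gt;BASE_DIR&lt;/code&gt; is replaced with a temporary directory and the logger name &lt;code&gt;demo&lt;/code&gt; is purely for illustration:&lt;/p&gt;

```python
# Stdlib-only demo of the LOGGING dict: dictConfig wires a file
# handler, and anything logged at INFO or above lands in app.log.
# BASE_DIR and the 'demo' logger name are stand-ins for this demo.
import logging
import logging.config
import os
import tempfile

BASE_DIR = tempfile.mkdtemp()

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {message}',
            'style': '{',
        },
    },
    'handlers': {
        'file': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': os.path.join(BASE_DIR, 'app.log'),
            'formatter': 'verbose',
        },
    },
    'loggers': {
        'demo': {
            'handlers': ['file'],
            'level': 'INFO',
        },
    },
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger('demo')
logger.info('application started')   # written to app.log
logger.debug('sneaky details')       # filtered out: below INFO

with open(os.path.join(BASE_DIR, 'app.log')) as f:
    contents = f.read()
```

Inside Django, the same thing happens automatically: the settings module runs dictConfig for you at startup.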

&lt;p&gt;To read more about log levels, refer to this &lt;a href="https://docs.djangoproject.com/en/5.1/topics/logging/#loggers" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Custom Prometheus Log Handler
&lt;/h2&gt;

&lt;p&gt;In a new &lt;code&gt;customlogger.py&lt;/code&gt; file inside your Django app (called &lt;code&gt;demo&lt;/code&gt; in this series), add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging
from prometheus_client import Counter

# Define Prometheus counters for different log levels
log_counters = {
    'INFO': Counter('django_log_info_total', 'Total number of INFO logs'),
    'WARNING': Counter('django_log_warning_total', 'Total number of WARNING logs'),
    'ERROR': Counter('django_log_error_total', 'Total number of ERROR logs'),
}

class PrometheusLogHandler(logging.Handler):  # Custom log handler
    def emit(self, record):
        log_level = record.levelname  # Get the log level (INFO, WARNING, ERROR)
        if log_level in log_counters:  # Check if we have a counter for this level
            log_counters[log_level].inc()  # Increment the counter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
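&lt;p&gt;To see the handler mechanism on its own, here is a stdlib-only stand-in where a plain dict plays the role of the &lt;code&gt;prometheus_client&lt;/code&gt; counters- no Prometheus needs to be running:&lt;/p&gt;

```python
# Stdlib-only stand-in for PrometheusLogHandler: a logging.Handler
# whose emit() bumps a per-level count, the same role that
# Counter.inc() plays in the real handler.
import logging

log_counts = {'INFO': 0, 'WARNING': 0, 'ERROR': 0}

class CountingLogHandler(logging.Handler):
    def emit(self, record):
        level = record.levelname    # 'INFO', 'WARNING', 'ERROR', ...
        if level in log_counts:     # only count levels we track
            log_counts[level] += 1

logger = logging.getLogger('counting-demo')
logger.setLevel(logging.INFO)
logger.propagate = False            # keep the demo output quiet
logger.addHandler(CountingLogHandler())

logger.info('user logged in')
logger.info('page rendered')
logger.warning('slow query detected')
```

After these three calls, the dict holds two INFO counts and one WARNING count; swapping the dict increments for Counter.inc() gives you the real handler above.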



&lt;h2&gt;
  
  
  Expose Metrics to Prometheus
&lt;/h2&gt;

&lt;p&gt;We will create a Django endpoint (a Django view) from which Prometheus can access the data.&lt;/p&gt;

&lt;p&gt;app/views.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from prometheus_client import generate_latest
from django.http import HttpResponse

def metrics_view(request):
    """Expose Prometheus metrics, including log counters."""
    metrics = generate_latest()  # Generate all Prometheus metrics
    return HttpResponse(metrics, content_type='text/plain')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;yourdjangoproject/urls.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.contrib import admin
from django.urls import path, include
from demo.views import metrics_view

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('django_prometheus.urls')),
    path('metrics/', metrics_view, name='metrics'),
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;prometheus.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scrape_configs:
  - job_name: 'django_app'  # A name for this job
    static_configs:
      - targets: ['localhost:8000']  # Django app's address
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will configure Prometheus to look for data at the address &lt;code&gt;localhost:8000&lt;/code&gt;. Alternatively, you can use &lt;code&gt;127.0.0.1:8000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Finally, we update our &lt;code&gt;settings.py&lt;/code&gt; to include the custom handler we created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from demo.customlogger import PrometheusLogHandler
.
.
LOGGING = {
    'version': 1,  # Specifies the logging configuration schema version
    'disable_existing_loggers': False,  # Keep default Django loggers active
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {message}',
            'style': '{',  # Use `{}` to format log messages
        },
    },
    'handlers': {
        'file': {  # Write logs to a file
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': os.path.join(BASE_DIR, 'app.log'),  # Log file path
            'formatter': 'verbose',  # Use the verbose format
        },
        'prometheus': {  # Send logs to Prometheus (custom handler)
            'level': 'INFO',
            'class': 'demo.customlogger.PrometheusLogHandler',  # Our custom handler
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file', 'prometheus'],  # Use both file and Prometheus handlers
            'level': 'INFO',  # Log everything INFO and above
            'propagate': True,  # Pass log messages to parent loggers
        },
    },
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Putting it all together
&lt;/h3&gt;

&lt;p&gt;Now we test what we have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py runserver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the &lt;code&gt;Prometheus&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./prometheus --config.file=prometheus.yml 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On your browser, open &lt;code&gt;127.0.0.1:8000/metrics&lt;/code&gt; to confirm that the metrics are being generated.&lt;/p&gt;
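&lt;p&gt;You can also spot-check the endpoint from a terminal. The exact numbers will differ, but the output should include lines like these for each counter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl 127.0.0.1:8000/metrics

# HELP django_log_info_total Total number of INFO logs
# TYPE django_log_info_total counter
django_log_info_total 3.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;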

&lt;p&gt;Navigate to the Prometheus UI at &lt;code&gt;127.0.0.1:9090&lt;/code&gt;. Query the log metrics by searching for &lt;code&gt;django_log_info_total&lt;/code&gt; and click on the Graph view.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6smfr196mj2znjnlqg38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6smfr196mj2znjnlqg38.png" alt="Prometheus UI" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What we've done so far
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Enabled logging in our Django application using the &lt;code&gt;logging&lt;/code&gt; module&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Created a Custom logging handler to increment the Prometheus counter whenever a log is recorded&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exposed the Django logs to Prometheus and visualized them in the Prometheus UI&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;For the full implementation, you can check out my &lt;a href="https://github.com/oluwabunmifife/logger-demo" rel="noopener noreferrer"&gt;repository&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>devops</category>
      <category>prometheus</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Monitoring Django WebApps with Prometheus</title>
      <dc:creator>Fife Oluwabunmi</dc:creator>
      <pubDate>Tue, 27 Aug 2024 12:07:51 +0000</pubDate>
      <link>https://forem.com/thecolossus/monitoring-django-webapps-with-prometheus-2kc2</link>
      <guid>https://forem.com/thecolossus/monitoring-django-webapps-with-prometheus-2kc2</guid>
      <description>&lt;p&gt;In the previous article, we looked at how to setup Prometheus and we got a feel of what it looks like to monitor a service. In this one, we'll be going straight into monitoring applications- Django Web apps, so if you're trying to figure out how to up your observability game, this article is for you!&lt;/p&gt;

&lt;p&gt;Let's get into it!&lt;/p&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Prometheus installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic understanding of how Prometheus works (Check my previous &lt;a href="https://dev.to/thecolossus/introduction-to-prometheus-monitoring-23mj"&gt;article&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A functioning Django application you want to monitor.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Something you should keep in mind: Prometheus monitors applications through &lt;code&gt;client libraries&lt;/code&gt;. Read the &lt;a href="https://prometheus.io/docs/instrumenting/clientlibs/" rel="noopener noreferrer"&gt;docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this article, we'll be using &lt;code&gt;django-prometheus&lt;/code&gt; to export the metrics of our Django App to Prometheus!&lt;/p&gt;
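&lt;p&gt;Under the hood, every client library does the same simple thing: it serves the current metric values as plain text over HTTP for Prometheus to scrape. Here is a stdlib-only sketch of that idea- the metric name &lt;code&gt;demo_requests_total&lt;/code&gt; is invented for the illustration:&lt;/p&gt;

```python
# Stdlib-only sketch of what a Prometheus client library does:
# expose current metric values as plain text on an HTTP endpoint,
# using the Prometheus text exposition format.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

request_count = 0  # the "metric" our pretend app tracks

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = ('# HELP demo_requests_total Requests served.\n'
                '# TYPE demo_requests_total counter\n'
                'demo_requests_total {}\n'.format(request_count)).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain; version=0.0.4')
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Port 0 asks the OS for any free port
server = HTTPServer(('127.0.0.1', 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

request_count = 3  # pretend the app has served three requests
url = 'http://127.0.0.1:{}/metrics'.format(server.server_port)
scraped = urllib.request.urlopen(url).read().decode()
server.shutdown()
```

django-prometheus automates exactly this for a Django app: it maintains the counters and serves them on a /metrics URL.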

&lt;h3&gt;
  
  
  Installing and setting up django-prometheus
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install django-prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your &lt;code&gt;settings.py&lt;/code&gt;, add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSTALLED_APPS = [
   ...
   'django_prometheus',
   ...
]

MIDDLEWARE = [
    'django_prometheus.middleware.PrometheusBeforeMiddleware',
    .
    .
    .
    'django_prometheus.middleware.PrometheusAfterMiddleware',
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;code&gt;urls.py&lt;/code&gt; of your Django project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;urlpatterns = [
    ...
    path('', include('django_prometheus.urls')),
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;This should be added in the &lt;code&gt;urls.py&lt;/code&gt; of your Django project and NOT the Django app. Please review this &lt;a href="https://www.makeuseof.com/difference-between-app-and-project-in-django/" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For more details on the &lt;code&gt;django-prometheus&lt;/code&gt; package, read &lt;a href="https://pypi.org/project/django-prometheus/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Be sure to update your requirements.txt file&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip freeze &amp;gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, update your &lt;code&gt;prometheus.yml&lt;/code&gt; file to look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval:     5s
  evaluation_interval: 5s

alerting:
  alertmanagers:

scrape_configs:
  - job_name: 'django-app'
    static_configs:
      - targets: ['127.0.0.1:8000']
        labels:
          group: 'server'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: If this section is unclear, refer to my previous &lt;a href="https://dev.to/thecolossus/introduction-to-prometheus-monitoring-23mj"&gt;article&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start your webserver and open &lt;code&gt;127.0.0.1:8000/metrics&lt;/code&gt;. You should have output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi22pt92h60gcy8g9sgkr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi22pt92h60gcy8g9sgkr.png" alt="Django app metrics" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be able to access the Prometheus UI, start up your Prometheus server with this command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./prometheus --config.file=prometheus.yml
# Make sure you're in the prometheus-2.54.0.darwin-amd64 dir
# The name will vary depending on your OS/distribution ;)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now run queries in the Prometheus UI at &lt;code&gt;http://127.0.0.1:9090/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the next one, we'll look at scraping custom metrics from our applications!&lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>django</category>
      <category>prometheus</category>
      <category>devops</category>
    </item>
    <item>
      <title>Introduction to Prometheus Monitoring</title>
      <dc:creator>Fife Oluwabunmi</dc:creator>
      <pubDate>Fri, 23 Aug 2024 13:18:37 +0000</pubDate>
      <link>https://forem.com/thecolossus/introduction-to-prometheus-monitoring-23mj</link>
      <guid>https://forem.com/thecolossus/introduction-to-prometheus-monitoring-23mj</guid>
<description>&lt;p&gt;A lot of newbies and "semi-newbies" in DevOps and SWE dodge and run from Monitoring and Prometheus because it seems really difficult to crack, or because it feels like a skill you'd only need at a senior level... &lt;/p&gt;

&lt;p&gt;Guess what? I was once there but after psyching myself up for almost one week, I finally started working with Prometheus and I'm here to make learning about Monitoring and Prometheus a whooollee lot simpler. So relax as we dive into Prometheus' World! :D&lt;/p&gt;

&lt;h2&gt;
  
  
  Prometheus
&lt;/h2&gt;

&lt;p&gt;In a nutshell, &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; is an open-source tool that allows you to keep tabs on different types of resources including applications running either locally or on some server, and even servers themselves. &lt;br&gt;
With Prometheus, you keep track of various data (metrics) related to your app or server.&lt;/p&gt;

&lt;p&gt;It's also pretty easy to install and set up, with just enough steps to get you started, so let's get right into it!&lt;/p&gt;
&lt;h3&gt;
  
  
  Installing Prometheus
&lt;/h3&gt;

&lt;p&gt;Prometheus is broken into different components, and you can download the binary files of these components individually; this way, you only have to get what you need!&lt;/p&gt;

&lt;p&gt;If you head over to the &lt;a href="https://prometheus.io/download/" rel="noopener noreferrer"&gt;Prometheus download page&lt;/a&gt;, you'll be able to find officially maintained components. This article won't cover them all; we'll only be looking at &lt;code&gt;prometheus&lt;/code&gt; itself and &lt;code&gt;node_exporter&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Choose your binary file according to your OS and architecture and download it. I'm using a Mac, so I'll select the &lt;code&gt;darwin&lt;/code&gt; build.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfg7is6d2zhbvjaw0ilf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfg7is6d2zhbvjaw0ilf.png" alt="prometheus binary files" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next thing to do is to unzip the file. If you're on a Linux/Mac machine, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar xvfz prometheus-2.54.0.darwin-amd64.tar.gz
cd prometheus-2.54.0.darwin-amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you should be in the &lt;code&gt;prometheus&lt;/code&gt; directory. You can run &lt;code&gt;ls&lt;/code&gt; and you should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8azyjjqygdghbie7s7c5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8azyjjqygdghbie7s7c5.png" alt="prometheus directory" width="800" height="62"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we're going to edit the &lt;code&gt;prometheus.yml&lt;/code&gt; file. Get rid of the default configurations and make sure your file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval:     15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we're asking Prometheus to scrape &lt;code&gt;localhost:9090&lt;/code&gt;, which is the address Prometheus itself runs on. We're doing this just so you can get a feel for how Prometheus works. We'll set up more exciting stuff soon.&lt;/p&gt;

&lt;p&gt;Now we're going to see Prometheus in action! Run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./prometheus --config.file=prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure you're in the &lt;code&gt;prometheus&lt;/code&gt; directory before running the command. We're calling Prometheus and asking it to use the file we just configured as the config file.&lt;/p&gt;

&lt;p&gt;Once it starts running, we can go over to &lt;code&gt;localhost:9090&lt;/code&gt; to view the UI Prometheus provides for us. We'll go into a bit more detail later but for now, just know that this is where you can run different queries to get information about the service you're running!&lt;/p&gt;

&lt;p&gt;Note: If you open &lt;code&gt;localhost:9090/metrics&lt;/code&gt;, you'll see the raw format of our service's metrics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8fzhagj09sf6kq0andt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8fzhagj09sf6kq0andt.png" alt="Prometheus raw metrics" width="800" height="691"&gt;&lt;/a&gt;&lt;/p&gt;
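&lt;p&gt;That raw format is just plain text: one sample per line, plus &lt;code&gt;#&lt;/code&gt; comment lines carrying help and type info. As a rough sketch, you could pick the values out with a few lines of Python (the sample lines below are illustrative):&lt;/p&gt;

```python
def parse_metrics(text):
    """Parse Prometheus' text exposition format into a dict mapping
    each sample (name plus labels) to its value, skipping comments."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The value is the last whitespace-separated token on the line.
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

# A couple of lines in the shape of what /metrics returns:
raw = """# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 33
prometheus_http_requests_total{code="200",handler="/metrics"} 5"""
print(parse_metrics(raw))
```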

&lt;p&gt;That's all for now. In the next one, we'll see how we can use Prometheus and Grafana to monitor a Django application!&lt;/p&gt;

&lt;p&gt;I hope you found this guide helpful.&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>monitoring</category>
      <category>linux</category>
    </item>
    <item>
      <title>Terraform and s3 buckets?!</title>
      <dc:creator>Fife Oluwabunmi</dc:creator>
      <pubDate>Mon, 12 Aug 2024 14:04:08 +0000</pubDate>
      <link>https://forem.com/thecolossus/terraform-and-s3-buckets-1250</link>
      <guid>https://forem.com/thecolossus/terraform-and-s3-buckets-1250</guid>
<description>&lt;p&gt;Servers, VPCs, NICs. Managing these resources in either a development or production environment can get tedious, repetitive, and exhausting.&lt;/p&gt;

&lt;p&gt;Terraform is an Infrastructure-as-Code tool used to specify, manage, and control resources across various environments, cloud and what have you.&lt;/p&gt;

&lt;p&gt;Very briefly, we'll look at how to create and manage AWS s3 buckets all using &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;NB: If you're not already familiar with AWS and s3 buckets, I advise getting some foundational knowledge on those. It's important to know how something works before considering automating it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you want me to put something together as regards that, let me know in the comments&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With that out of the way, let's get to it!&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a standard s3 bucket
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # Specify your region

}

# Create s3 bucket
resource "aws_s3_bucket" "testbucket" {
  bucket = "my-new-bucket"

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more details on creating s3 buckets check the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Creating an s3 bucket is one thing. Making it available to the public (if necessary) is another thing. The rest of this article will be helpful to people trying to make their bucket or the objects in it public.&lt;/p&gt;

&lt;h3&gt;
  
  
  Making your bucket accessible
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Bucket ownership controls
resource "aws_s3_bucket_ownership_controls" "buck-owner" {
  bucket = aws_s3_bucket.testbucket.id
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

# Disable bucket default security
resource "aws_s3_bucket_public_access_block" "public-block" {
  bucket = aws_s3_bucket.testbucket.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

resource "aws_s3_bucket_acl" "buk-acl" {
  depends_on = [
    aws_s3_bucket_ownership_controls.buck-owner,
    aws_s3_bucket_public_access_block.public-block,
  ]

  bucket = aws_s3_bucket.testbucket.id
  acl    = "public-read"
}

# Enable public read and write access
resource "aws_s3_bucket_policy" "allow-public-access" {
  bucket = aws_s3_bucket.testbucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = "*"
        Action = [
          "s3:GetObject",
          "s3:PutObject"
        ]
        Resource = [
          "${aws_s3_bucket.store-ket.arn}/*"
        ]
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each stage of this section is highly important. To learn more about these configurations, refer to some of these resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_ownership_controls" rel="noopener noreferrer"&gt;Bucket Ownership Control&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_acl#with-public-read-acl" rel="noopener noreferrer"&gt;s3 bucket access control lists&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_policy" rel="noopener noreferrer"&gt;s3 bucket policy&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
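&lt;p&gt;The &lt;code&gt;jsonencode&lt;/code&gt; call above simply produces an IAM policy document. As a plain-Python sketch of the same document (the bucket ARN here is a placeholder; Terraform fills in the real one from the resource):&lt;/p&gt;

```python
import json

# Hypothetical bucket ARN for illustration only.
bucket_arn = "arn:aws:s3:::my-new-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            # The trailing /* targets the objects, not the bucket itself.
            "Resource": [bucket_arn + "/*"],
        }
    ],
}
print(json.dumps(policy, indent=2))
```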

&lt;p&gt;Some additional configurations you can add to your s3 bucket include versioning &amp;amp; server-side encryption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Versioning
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_versioning" "enable-versioning" {
  bucket = aws_s3_bucket.testbucket.id
  versioning_configuration {
    status = "Enabled"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To read more: &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_versioning" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Server-side encryption
&lt;/h3&gt;

&lt;p&gt;There are different methods to enable server-side encryption, but for simplicity's sake, we'll stick to one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_server_side_encryption_configuration" "encrypt-ket" {
  bucket = aws_s3_bucket.testbucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "AES256"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've come to the end of this short tutorial. But as a bonus, in case you're wondering whether you can upload files to the bucket directly from your code: yes, you can!&lt;/p&gt;

&lt;h3&gt;
  
  
  Upload HTML file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_object" "upload" {
  key                    = "index.html"
  bucket                 = aws_s3_bucket.testbucket.id
  source                 = "buk-list/index.html"
  acl                    = "public-read"
  server_side_encryption = "AES256"
  content_type           = "text/html"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure the &lt;code&gt;source&lt;/code&gt; points to the location of the file you want to upload. I advise creating a folder in your project directory so the source looks like this &lt;code&gt;./folder/index.html&lt;/code&gt;&lt;/p&gt;
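&lt;p&gt;Once the object is public, it can be reached over a virtual-hosted-style S3 URL. A quick sketch of how that URL is put together (using the bucket name and region from above):&lt;/p&gt;

```python
def object_url(bucket, region, key):
    # Virtual-hosted-style S3 URL: bucket name in the hostname,
    # object key in the path.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

print(object_url("my-new-bucket", "us-east-1", "index.html"))
```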

&lt;p&gt;Thanks for joining me on this one. I hope you found this helpful. Leave your questions for me in the comments and I'll be sure to help out however I can!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Web-server X Load Balancers</title>
      <dc:creator>Fife Oluwabunmi</dc:creator>
      <pubDate>Sat, 27 Jul 2024 02:47:18 +0000</pubDate>
      <link>https://forem.com/thecolossus/web-server-x-load-balancers-1p1m</link>
      <guid>https://forem.com/thecolossus/web-server-x-load-balancers-1p1m</guid>
      <description>&lt;p&gt;Load balancers are crucial in mission-critical environments where multiple customers need to access data/resources across various regions.&lt;/p&gt;

&lt;p&gt;I set up a CI/CD pipeline with GitHub Actions that deployed a containerized application on multiple servers. A load balancer, built with Caddy, was set up to distribute traffic between the servers.&lt;/p&gt;

&lt;p&gt;When I received this task, it seemed a little overwhelming, but I was able to break it down into different sections using a method I learned -&amp;gt; &lt;code&gt;DevSecOps&lt;/code&gt;. The idea is to break the project into manageable bits. I'll be using this method to walk you through my project!&lt;/p&gt;

&lt;p&gt;Breaking down the task into bits helped me to focus on one section at a time and ensure that I covered all the &lt;em&gt;‘coverables’&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwjr1ew41mahlkqxq4k5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwjr1ew41mahlkqxq4k5.png" alt="Web Server Architecture" width="800" height="692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Join me as we go into detail on this exciting project!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Dev&lt;/strong&gt;secops
&lt;/h2&gt;

&lt;p&gt;Before any process can be automated, there must be some assurance that things and services work as they should. I started by manually containerizing my application and ensuring it ran on the server. &lt;/p&gt;

&lt;p&gt;My application is a Django app, so this part will differ depending on the peculiarities of your stack and application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dockerfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD ["python3.9", "manage.py", "runserver", "0.0.0.0:8000"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Although the details of the &lt;code&gt;Dockerfile&lt;/code&gt; are outside the scope of this article, you can refer to Docker's documentation. I also found this article quite useful: &lt;a href="https://www.untangled.dev/2020/06/06/docker-django-local-dev" rel="noopener noreferrer"&gt;Docker Django Deployment&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;My Web application also has a DB service, so I used Docker Compose to manage both containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  compose.yaml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  db:
    image: nouchka/sqlite3:latest
    volumes:
      - ./data/db:/root/db
    environment:
      - SQLITE3_DB=db.sqlite3
  web:
    build: .
    command: python3.9 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To learn more about Docker Compose, check the official documentation. You can also refer to this article. &lt;em&gt;Again, this is specific to my stack and may not apply to other types of applications.&lt;/em&gt; &lt;a href="https://sgino209.medium.com/django-sqlite-docker-in-local-production-d082a7044af1" rel="noopener noreferrer"&gt;Link to Article&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After testing the dockerization, the next thing to work on was the Load Balancer. There are a number of tools to choose from, but I chose to work with &lt;a href="https://caddyserver.com/" rel="noopener noreferrer"&gt;Caddy&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Caddyfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:80 {
  reverse_proxy 11.11.11.11:8000 22.22.22.22:8000 {  # IP addresses go here :)
    lb_policy random
  }
}

# File is stored at /etc/caddy/Caddyfile by default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;random&lt;/code&gt; is the Load-balancing algorithm I decided to work with ;). Please refer to &lt;a href="https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#load-balancing" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt; for more details. That concludes the &lt;code&gt;Dev&lt;/code&gt; part of this project.&lt;/p&gt;
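&lt;p&gt;Under the hood, the &lt;code&gt;random&lt;/code&gt; policy boils down to picking one backend uniformly at random per request. A tiny Python sketch of that idea (the IPs are the placeholders from the Caddyfile):&lt;/p&gt;

```python
import random

# Placeholder backend addresses, matching the Caddyfile above.
BACKENDS = ["11.11.11.11:8000", "22.22.22.22:8000"]

def pick_backend(backends):
    # Caddy's 'random' lb_policy: a uniform random choice per request.
    return random.choice(backends)

print(pick_backend(BACKENDS))
```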

&lt;h2&gt;
  
  
  devSecops
&lt;/h2&gt;

&lt;p&gt;Aside from the fact that my servers (EC2 instances) had Security Groups, I decided to add an extra layer of security by setting up a &lt;a href="https://ubuntu.com/server/docs/firewalls" rel="noopener noreferrer"&gt;Firewall&lt;/a&gt; for the servers.&lt;/p&gt;

&lt;p&gt;There wasn't really much to it; I just decided which ports I wanted open at the OS level. &lt;/p&gt;

&lt;h2&gt;
  
  
  devsecOps
&lt;/h2&gt;

&lt;p&gt;The final part of this project was to include as much automation as possible to streamline deployment. The best and easiest choice was GitHub Actions.&lt;/p&gt;

&lt;p&gt;I set up the pipeline to be triggered whenever the codebase changed. The workflow rebuilt the application image, pushed it to Docker Hub, pulled the image on my servers, and started the containers.&lt;/p&gt;

&lt;p&gt;After a lot of work on this part, I got something that worked for me.&lt;/p&gt;

&lt;h3&gt;
  
  
  .github/workflows/main.yaml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build to servers

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        server: [11.11.11.11, 22.22.22.22]
    env:
      EC2_SSH_PRIVATE_KEY: ${{ secrets.EC2_SSH_PRIVATE_KEY }}
      EC2_USERNAME: ${{ secrets.EC2_USERNAME }}
      DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
      DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}


    steps:
      - name: Checkout source
        uses: actions/checkout@v3

      - name: Login to Docker Hub
        run: docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}

      - name: Build Docker Image
        run: |
          docker compose up --build -d
          docker compose down

      - name: Tag the docker image
        run: docker tag mockstack-overflow-web ${{ secrets.DOCKER_USERNAME }}/mockstack-overflow-web:latest

      - name: Publish image to docker hub
        run: docker push ${{ secrets.DOCKER_USERNAME }}/mockstack-overflow-web:latest

      - name: Login to servers
        uses: omarhosny206/setup-ssh-for-ec2@v1.0.0
        with:
            EC2_SSH_PRIVATE_KEY: $EC2_SSH_PRIVATE_KEY
            EC2_URL: ${{ matrix.server }}

      - name: Run docker commands on server 1 &amp;amp; 2
        run: |
          ssh -o StrictHostKeyChecking=no $EC2_USERNAME@${{ matrix.server }} &amp;lt;&amp;lt; EOF
            docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
            docker pull $DOCKER_USERNAME/mockstack-overflow-web:latest
            docker stop mockstack-overflow-web || true
            docker rm mockstack-overflow-web || true
            docker run -d --name mockstack-overflow-web -p 8000:8000 $DOCKER_USERNAME/mockstack-overflow-web:latest
          EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
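&lt;p&gt;To make the heredoc step above easier to reason about, here's a small Python sketch of the remote commands the workflow pipes over SSH to each matrix server (the image and container names mirror the ones above):&lt;/p&gt;

```python
def remote_commands(image, name):
    # The per-server deploy sequence: pull the fresh image, remove
    # any old container, then start a new one on port 8000.
    return [
        f"docker pull {image}",
        f"docker stop {name} || true",
        f"docker rm {name} || true",
        f"docker run -d --name {name} -p 8000:8000 {image}",
    ]

cmds = remote_commands("fifss/mockstack-overflow-web:latest", "mockstack-overflow-web")
print("\n".join(cmds))
```

&lt;p&gt;The &lt;code&gt;|| true&lt;/code&gt; parts make stopping and removing the old container a no-op on a server's first deploy.&lt;/p&gt;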



&lt;p&gt;Make sure you have your secrets stored on GitHub. To do this, in your &lt;strong&gt;GitHub repository&lt;/strong&gt;, go to &lt;code&gt;Settings -&amp;gt; Secrets &amp;amp; variables -&amp;gt; Actions -&amp;gt; New Repository secret&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For me, my &lt;code&gt;EC2_SSH_PRIVATE_KEY&lt;/code&gt; was the private key for my servers that I downloaded while setting up the server.&lt;/p&gt;

&lt;p&gt;I hope you've been able to learn a thing or two 😊&lt;/p&gt;

&lt;p&gt;Feel free to leave any questions you have for me in the comments. &lt;/p&gt;

&lt;p&gt;My name is Fife, let's connect and work together 🤝🏾&lt;/p&gt;

&lt;p&gt;Btw, this is my debut in this community 😅 not too shabby huh?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reference: &lt;a href="https://www.cleanpng.com/png-devops-business-process-software-development-proce-6814831/" rel="noopener noreferrer"&gt;Cover Image&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>githubactions</category>
      <category>django</category>
    </item>
  </channel>
</rss>
