<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Bayo Ogundele</title>
    <description>The latest articles on Forem by Bayo Ogundele (@bayo_ogundele_b3f16b3c436).</description>
    <link>https://forem.com/bayo_ogundele_b3f16b3c436</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3703256%2F57282a2e-3055-4b33-87f0-67fd1190cfe9.png</url>
      <title>Forem: Bayo Ogundele</title>
      <link>https://forem.com/bayo_ogundele_b3f16b3c436</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bayo_ogundele_b3f16b3c436"/>
    <language>en</language>
    <item>
      <title>Building a Production-Grade DevOps Homelab on a 4GB RAM HP Stream PC</title>
      <dc:creator>Bayo Ogundele</dc:creator>
      <pubDate>Fri, 20 Feb 2026 18:38:43 +0000</pubDate>
      <link>https://forem.com/bayo_ogundele_b3f16b3c436/building-a-production-grade-devops-homelab-on-4gb-ram-hp-stream-pc-1k2h</link>
      <guid>https://forem.com/bayo_ogundele_b3f16b3c436/building-a-production-grade-devops-homelab-on-4gb-ram-hp-stream-pc-1k2h</guid>
      <description>&lt;p&gt;After months of procrastination, I finally started building my home lab. Day 1 was all about establishing the foundation: organizing the project structure, setting up Docker Compose, and deploying a complete monitoring stack with Prometheus, Node Exporter, and Grafana.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hardware Reality Check
&lt;/h2&gt;

&lt;p&gt;I started with an HP Stream (4GB RAM), but quickly realized it wasn't going to cut it. Container operations were painfully slow—startup times exceeded 2-3 minutes, and the system was constantly swapping memory. After struggling for a bit, I switched to my main PC with 8GB RAM and a stronger processor. The difference was immediate and dramatic. What took minutes now took seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson learned:&lt;/strong&gt; Hardware constraints directly impact your development velocity. Don't underestimate the importance of adequate resources when building infrastructure.&lt;/p&gt;




&lt;h2&gt;
  
  
  Day 1 Project Structure
&lt;/h2&gt;

&lt;p&gt;Before jumping into Docker, I organized the project directory to ensure scalability and maintainability. Here's what I created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flev4o9emc3qj6urx1u7l.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flev4o9emc3qj6urx1u7l.jpeg" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;My organized homelab directory structure - separation of concerns from day one&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The directory structure separates concerns into distinct layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;monitoring/&lt;/strong&gt; - Prometheus and Grafana configurations with provisioning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cicd/&lt;/strong&gt; - Git server (Gitea) and CI/CD automation (Drone) for future deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;apps/&lt;/strong&gt; - Application deployments including the sample-app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;docs/&lt;/strong&gt; - Documentation and screenshots&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scripts/&lt;/strong&gt; - Utility scripts for setup and maintenance&lt;/li&gt;
&lt;/ul&gt;
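
&lt;p&gt;As a sketch, the layout looks like this (top-level directories as listed above; the root name and nested files are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;homelab/
├── monitoring/
│   ├── docker-compose.yml
│   ├── prometheus/
│   │   └── prometheus.yml
│   └── grafana/
│       └── provisioning/
├── cicd/
│   ├── gitea/
│   └── drone/
├── apps/
│   └── sample-app/
├── docs/
└── scripts/
&lt;/code&gt;&lt;/pre&gt;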

&lt;p&gt;This structure makes it easy to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add new services without cluttering the root directory&lt;/li&gt;
&lt;li&gt;Version control configurations separately from code&lt;/li&gt;
&lt;li&gt;Scale the project as it grows&lt;/li&gt;
&lt;li&gt;Onboard new team members quickly&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Understanding the Monitoring Stack
&lt;/h2&gt;

&lt;p&gt;My Day 1 setup consists of three core components working together:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prometheus&lt;/strong&gt; is the time-series database that collects metrics. It scrapes endpoints at regular intervals (I set mine to 15 seconds) and stores the data. Think of it as the "data collector" of the stack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node Exporter&lt;/strong&gt; is a lightweight agent that runs on the host and collects hardware and OS metrics—CPU usage, memory consumption, disk I/O, network traffic, and more. It exposes them on port 9100 in a format that Prometheus understands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt; is the visualization layer. It connects to Prometheus as a data source and allows you to create beautiful dashboards, set up alerts, and explore metrics interactively. It's the "pretty face" of your monitoring system.&lt;/p&gt;

&lt;p&gt;Together, they form a complete monitoring pipeline: Node Exporter collects metrics → Prometheus stores them → Grafana visualizes them.&lt;/p&gt;




&lt;h2&gt;
  
  
  Docker Compose Architecture
&lt;/h2&gt;

&lt;p&gt;I used Docker Compose to orchestrate all three services. Here's what each service does:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prometheus Service:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs the official Prometheus image&lt;/li&gt;
&lt;li&gt;Exposes port 9090 for the web UI&lt;/li&gt;
&lt;li&gt;Mounts the prometheus.yml configuration file to define scrape targets&lt;/li&gt;
&lt;li&gt;Uses a named volume to persist time-series data across container restarts&lt;/li&gt;
&lt;li&gt;Joins a custom "monitoring" network for service-to-service communication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Node Exporter Service:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs the official Node Exporter image&lt;/li&gt;
&lt;li&gt;Exposes port 9100 where metrics are available&lt;/li&gt;
&lt;li&gt;Mounts the host's /proc and /sys directories to access system metrics&lt;/li&gt;
&lt;li&gt;Also joins the monitoring network so Prometheus can scrape it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Grafana Service:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs the official Grafana image&lt;/li&gt;
&lt;li&gt;Exposes port 3000 for the web UI&lt;/li&gt;
&lt;li&gt;Uses environment variables to set the admin password&lt;/li&gt;
&lt;li&gt;Mounts a volume for persistent dashboard and configuration data&lt;/li&gt;
&lt;li&gt;Depends on Prometheus being available before it starts&lt;/li&gt;
&lt;li&gt;Joins the monitoring network&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The services communicate through a custom Docker bridge network called "monitoring." This means Prometheus can reach Node Exporter at &lt;code&gt;http://node-exporter:9100&lt;/code&gt; and Grafana can reach Prometheus at &lt;code&gt;http://prometheus:9090&lt;/code&gt;—all without exposing these services to the host network unless necessary.&lt;/p&gt;
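
&lt;p&gt;A docker-compose.yml matching this description might look like the following sketch (image tags, volume names, and the admin password are illustrative placeholders, not my exact file):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    networks:
      - monitoring

  node-exporter:
    image: prom/node-exporter:latest
    command:
      - "--path.procfs=/host/proc"
      - "--path.sysfs=/host/sys"
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
    ports:
      - "9100:9100"
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:latest
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=changeme   # placeholder
    volumes:
      - grafana-data:/var/lib/grafana
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
    networks:
      - monitoring

volumes:
  prometheus-data:
  grafana-data:

networks:
  monitoring:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;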

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yrqiarqranto3c02yyi.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yrqiarqranto3c02yyi.jpeg" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;All 7 containers running smoothly - monitoring, Git server, and CI/CD infrastructure&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Prometheus Configuration
&lt;/h2&gt;

&lt;p&gt;The prometheus.yml file defines what metrics to collect and from where. I configured two scrape jobs:&lt;/p&gt;

&lt;p&gt;The first job targets Prometheus itself on port 9090, allowing it to collect its own internal metrics. This is useful for monitoring the health of the monitoring system.&lt;/p&gt;

&lt;p&gt;The second job targets Node Exporter on port 9100. This is where all the system metrics come from—CPU, memory, disk, network, and more. Prometheus scrapes this endpoint every 15 seconds and stores the data.&lt;/p&gt;

&lt;p&gt;The global configuration sets the scrape interval (how often to collect metrics) and evaluation interval (how often to evaluate alert rules). I kept both at 15 seconds for a good balance between data granularity and resource usage.&lt;/p&gt;
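
&lt;p&gt;Put together, the configuration described here fits in a short prometheus.yml (a condensed sketch; job names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;global:
  scrape_interval: 15s      # how often to collect metrics
  evaluation_interval: 15s  # how often to evaluate alert rules

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "node-exporter"
    static_configs:
      - targets: ["node-exporter:9100"]
&lt;/code&gt;&lt;/pre&gt;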

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagga4282pisr7qq91h67.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagga4282pisr7qq91h67.jpeg" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Prometheus successfully scraping metrics from both itself and Node Exporter - all targets UP and healthy&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting Up Grafana
&lt;/h2&gt;

&lt;p&gt;After the containers started, I accessed Grafana on port 3000 and performed the initial setup:&lt;/p&gt;

&lt;p&gt;First, I added Prometheus as a data source. The key here is using the service name from Docker Compose—&lt;code&gt;http://prometheus:9090&lt;/code&gt;—rather than localhost. This works because all services are on the same Docker network.&lt;/p&gt;

&lt;p&gt;Next, I created my first dashboard. I added panels to visualize:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU usage over time (using the &lt;code&gt;node_cpu_seconds_total&lt;/code&gt; metric)&lt;/li&gt;
&lt;li&gt;Available memory (using &lt;code&gt;node_memory_MemAvailable_bytes&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Disk I/O operations (using &lt;code&gt;node_disk_io_time_seconds_total&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Network traffic (using &lt;code&gt;node_network_transmit_bytes_total&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I set the dashboard to auto-refresh every 30 seconds so I could see real-time updates.&lt;/p&gt;
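
&lt;p&gt;For reference, typical PromQL expressions behind panels like these (standard queries over the metrics listed above, not necessarily my exact dashboard JSON):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Overall CPU usage in percent (non-idle time, averaged over all cores)
100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Available memory in GiB
node_memory_MemAvailable_bytes / 1024 / 1024 / 1024

# Fraction of time each disk spent doing I/O
rate(node_disk_io_time_seconds_total[5m])

# Outbound network throughput in bytes per second
rate(node_network_transmit_bytes_total[5m])
&lt;/code&gt;&lt;/pre&gt;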

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rgkk4b7514oczbiml7o.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rgkk4b7514oczbiml7o.jpeg" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Real-time system monitoring with Grafana - CPU at 21%, RAM at 68.8%, and 6.9 days uptime. Beautiful visualization of all critical metrics.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Stack Matters
&lt;/h2&gt;

&lt;p&gt;This monitoring foundation is crucial because:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visibility&lt;/strong&gt; - You can't optimize what you don't measure. With Prometheus and Grafana, I have complete visibility into system performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alerting&lt;/strong&gt; - Grafana allows me to set up alerts that trigger when metrics cross thresholds. This is essential for production systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; - This stack is designed to grow. As I add more services to my home lab, I can add new scrape targets to Prometheus without changing the core setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning&lt;/strong&gt; - Building this on Day 1 teaches fundamental DevOps concepts: containerization, networking, time-series databases, and visualization.&lt;/p&gt;




&lt;h2&gt;
  
  
  Current System Status
&lt;/h2&gt;

&lt;p&gt;Looking at my dashboard, I can see:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Usage:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RAM: 5.5GB / 7GB used (68.8%)&lt;/li&gt;
&lt;li&gt;SWAP: 2.3GB / 16GB used (14.6%)&lt;/li&gt;
&lt;li&gt;CPU: 4 cores averaging 21.3% load&lt;/li&gt;
&lt;li&gt;Root FS: 233 GiB total, 81.8% utilized&lt;/li&gt;
&lt;li&gt;Uptime: 6.9 days&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Running Services:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grafana (visualization)&lt;/li&gt;
&lt;li&gt;Prometheus (metrics database)&lt;/li&gt;
&lt;li&gt;Node Exporter (system metrics)&lt;/li&gt;
&lt;li&gt;Gitea (self-hosted Git server)&lt;/li&gt;
&lt;li&gt;Gitea-DB (PostgreSQL database)&lt;/li&gt;
&lt;li&gt;Drone Server (CI/CD orchestrator)&lt;/li&gt;
&lt;li&gt;Drone Runner (build executor)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All containers healthy and running smoothly! ✅&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways from Day 1
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hardware matters&lt;/strong&gt; - The jump from 4GB to 8GB made a massive difference&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Organization from the start&lt;/strong&gt; - A clean directory structure saves headaches later&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose is powerful&lt;/strong&gt; - Orchestrating multiple services with a single YAML file is incredibly efficient&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring from Day 1&lt;/strong&gt; - Having visibility into metrics from the beginning helps identify bottlenecks early&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker networking&lt;/strong&gt; - Custom networks simplify service communication and keep things clean&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;Day 1 is just the foundation. My immediate next steps include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 2 - Git Server &amp;amp; CI/CD:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure Gitea webhooks for automated builds&lt;/li&gt;
&lt;li&gt;Create complete CI/CD pipelines with Drone&lt;/li&gt;
&lt;li&gt;Deploy sample applications through automated pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Week 1 Goals:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fine-tuning Prometheus scrape intervals and retention policies&lt;/li&gt;
&lt;li&gt;Creating more sophisticated Grafana dashboards with custom queries&lt;/li&gt;
&lt;li&gt;Setting up alerting rules and notification channels&lt;/li&gt;
&lt;li&gt;Adding more exporters (PostgreSQL exporter, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Future Plans:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implementing Loki for log aggregation&lt;/li&gt;
&lt;li&gt;Moving to Kubernetes on Oracle Cloud Free Tier&lt;/li&gt;
&lt;li&gt;Infrastructure as Code with Terraform and Ansible&lt;/li&gt;
&lt;li&gt;Service mesh implementation&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Start with Adequate Hardware
&lt;/h3&gt;

&lt;p&gt;Don't try to run production-grade infrastructure on inadequate hardware. The difference between 4GB and 8GB RAM wasn't just performance—it was the difference between frustration and productivity.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Monitoring First, Applications Second
&lt;/h3&gt;

&lt;p&gt;By establishing monitoring before deploying applications, I have baseline metrics and can see exactly how each new service impacts system resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Docker Compose for Local Development
&lt;/h3&gt;

&lt;p&gt;Docker Compose made it trivial to spin up complex multi-container applications. What would have taken hours to configure manually took minutes with docker-compose.yml.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Document Everything
&lt;/h3&gt;

&lt;p&gt;I documented every step in my README and took screenshots throughout. Future me will thank present me when I need to recreate this or explain it in interviews.&lt;/p&gt;




&lt;h2&gt;
  
  
  For Others Starting Out
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;If you want to build a similar homelab:&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  You Don't Need Expensive Hardware
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;8GB of RAM is sufficient for a robust local setup&lt;/li&gt;
&lt;li&gt;You can start with just monitoring and add services incrementally&lt;/li&gt;
&lt;li&gt;Cloud free tiers (Oracle, AWS, GCP) are available when you outgrow local resources&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Start with Monitoring
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Don't jump straight to complex applications&lt;/li&gt;
&lt;li&gt;Build observability first&lt;/li&gt;
&lt;li&gt;You'll thank yourself when things break&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Docker Compose is Your Friend
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Learn it early, use it often&lt;/li&gt;
&lt;li&gt;Makes complex setups simple&lt;/li&gt;
&lt;li&gt;Infrastructure as code from day one&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  One Service at a Time
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Don't try to deploy everything at once&lt;/li&gt;
&lt;li&gt;Get one thing working perfectly&lt;/li&gt;
&lt;li&gt;Then add the next piece&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building a home lab teaches you so much about infrastructure, containerization, and monitoring. Day 1 might seem simple—just three services in Docker Compose—but it's the foundation for everything that comes next.&lt;/p&gt;

&lt;p&gt;The key takeaway: &lt;strong&gt;start simple, measure everything, and iterate&lt;/strong&gt;. With the right monitoring in place from Day 1, you'll have the visibility needed to make informed decisions about what to build next.&lt;/p&gt;

&lt;p&gt;Looking at my Grafana dashboard now, seeing all those metrics flowing in real-time, I know this foundation will serve me well as I continue building out the homelab.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tomorrow:&lt;/strong&gt; Adding Git server webhooks and creating my first automated CI/CD pipeline!&lt;/p&gt;




</description>
      <category>devops</category>
      <category>homelab</category>
      <category>developer</category>
      <category>docker</category>
    </item>
    <item>
      <title>Deploying a Containerized E-commerce Application on AWS EC2: A Practical DevOps Walkthrough</title>
      <dc:creator>Bayo Ogundele</dc:creator>
      <pubDate>Tue, 10 Feb 2026 18:02:34 +0000</pubDate>
      <link>https://forem.com/bayo_ogundele_b3f16b3c436/deploying-a-containerized-e-commerce-application-on-aws-ec2-a-practical-devops-walkthrough-5b10</link>
      <guid>https://forem.com/bayo_ogundele_b3f16b3c436/deploying-a-containerized-e-commerce-application-on-aws-ec2-a-practical-devops-walkthrough-5b10</guid>
      <description>&lt;p&gt;This project started with a simple goal: deploy an e-commerce application the way real systems are deployed — not locally, not manually, and not magically.&lt;/p&gt;

&lt;p&gt;I wanted to understand the &lt;em&gt;entire flow&lt;/em&gt;: code → version control → containerization → registry → cloud server → running services → observability. So I built the system end to end and deployed it on AWS EC2 using Docker and Docker Compose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repository:&lt;/strong&gt; &lt;a href="https://github.com/BayoJohn/Project-2.git" rel="noopener noreferrer"&gt;https://github.com/BayoJohn/Project-2.git&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 The Problem I Wanted to Solve
&lt;/h2&gt;

&lt;p&gt;A lot of projects stop at “it works on my machine.” I wanted to go further and deal with the questions that appear the moment you leave localhost:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How does the application get packaged?&lt;/li&gt;
&lt;li&gt;How does the server get the application?&lt;/li&gt;
&lt;li&gt;How are services started reliably?&lt;/li&gt;
&lt;li&gt;How do you observe what’s running?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project forced me to answer those questions directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  📦 Version Control as the Starting Point
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjm37a7nlv5bger86y9w3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjm37a7nlv5bger86y9w3.png" alt="Setting up GitHub Repository" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Setting up the GitHub repository to track infrastructure and application code.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Everything begins with a Git repository. I created a GitHub repo to serve as the single source of truth for the application code and infrastructure configuration. This wasn’t just about backup — it was about traceability. Every change to the system has a history, and that history matters once deployment is involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  🐳 Containerizing the Application
&lt;/h2&gt;

&lt;p&gt;I containerized the application using Docker because consistency is non-negotiable in deployment. Containers ensure the same environment runs locally and on the server, eliminating the “works here, breaks there” problem.&lt;/p&gt;

&lt;p&gt;Docker Compose was used to define how multiple services run together. Instead of starting services manually, Compose allowed me to declare the system: the application, its dependencies, and how they communicate.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚙️ Automating Builds with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;After containerizing the application, I introduced &lt;strong&gt;GitHub Actions&lt;/strong&gt; to automate the build process. The goal wasn’t complexity — it was consistency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fritpxwp4heob92g6th25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fritpxwp4heob92g6th25.png" alt="GitHub Actions Workflow YAML" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Defining the CI/CD pipeline in YAML to automate testing and building.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This step mirrors how production teams treat builds: servers don’t compile code, and deployments don’t depend on local machines.&lt;/p&gt;
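
&lt;p&gt;A workflow of this shape fits in a few lines of YAML (the secret names and image tag here are placeholders, not the repository's actual values):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;name: build-and-push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push the image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: your-dockerhub-user/ecommerce-app:latest
&lt;/code&gt;&lt;/pre&gt;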

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpp55w10xrl3r4fb54z7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpp55w10xrl3r4fb54z7.png" alt="Successful GitHub Actions Run" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A successful GitHub Actions run: verified, built, and ready to deploy.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📤 Using Docker Hub as an Image Registry
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55pdpbd8d618df5ltlhe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55pdpbd8d618df5ltlhe.png" alt="Docker Hub Repository" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pushing the final Docker image to Docker Hub for centralized access.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After building the Docker images, I pushed them to Docker Hub. The registry acts as the bridge between development and production. Instead of copying files or rebuilding images on the server, the EC2 instance simply pulls the exact image it needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  ☁️ Provisioning the Server on AWS EC2
&lt;/h2&gt;

&lt;p&gt;For the infrastructure, I used an AWS EC2 instance. EC2 was a deliberate choice because it exposes you to the fundamentals of cloud computing without abstracting too much away.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkldqrm6ced6gge1lgmo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkldqrm6ced6gge1lgmo.png" alt="AWS EC2 Instance Console" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The AWS EC2 Console: Provisioning the virtual server that hosts the application.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once the instance was created, I installed Docker and Docker Compose directly on the server.&lt;/p&gt;
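
&lt;p&gt;That setup boils down to a few commands. On Amazon Linux 2023, for example (the distro is an assumption; an Ubuntu instance would use apt and the docker-compose-plugin package instead):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Install and start the Docker engine
sudo dnf install -y docker
sudo systemctl enable --now docker

# Allow the default user to run docker without sudo
sudo usermod -aG docker ec2-user

# Install the Docker Compose v2 CLI plugin
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -sSL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
  -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
&lt;/code&gt;&lt;/pre&gt;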

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm52v8cqtbp78ehragh3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm52v8cqtbp78ehragh3r.png" alt=" " width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A quick docker ps on the EC2 instance confirms that all 6 services—from the DB to the monitoring tools—are Up and healthy.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Deploying the Application on EC2
&lt;/h2&gt;

&lt;p&gt;With Docker installed, the deployment flow was straightforward but intentional. I cloned the repository onto the EC2 instance, pulled the images, and started the services.&lt;/p&gt;
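
&lt;p&gt;In command form, that flow amounts to (repository URL as above; directory and compose file names as cloned):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;git clone https://github.com/BayoJohn/Project-2.git
cd Project-2
docker compose pull     # fetch the prebuilt images from Docker Hub
docker compose up -d    # start every service in the background
docker ps               # confirm the containers are Up
&lt;/code&gt;&lt;/pre&gt;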

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyhqs6kl83pkq9vocogl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyhqs6kl83pkq9vocogl.png" alt="Docker Compose Up Command" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The terminal view: Using Docker Compose to spin up the entire multi-container stack.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📊 Monitoring and Observability with Grafana
&lt;/h2&gt;

&lt;p&gt;Deployment isn’t complete if you can’t see what’s happening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqy6bpqwve9a30w30g1o2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqy6bpqwve9a30w30g1o2.png" alt="Grafana Dashboard Overview" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Real-time observability: Monitoring the EC2 instance health via Grafana.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I included Grafana in the stack to introduce observability early. Grafana provides visibility into system metrics like CPU, RAM, and network health.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67ksh7388dmud8tsiuyj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67ksh7388dmud8tsiuyj.png" alt="Prometheus Data Source Config" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Configuring the security groups and instance details to allow public access.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 What This Project Changed for Me
&lt;/h2&gt;

&lt;p&gt;This project shifted my mindset from “writing code” to “operating systems.” You stop thinking only about features and start thinking about reliability and repeatability. Breaking things on a real server teaches lessons no tutorial ever will.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;If you’re learning DevOps or cloud engineering, my advice is simple: deploy something real. Deal with registries, ports, servers, and failures. That’s where the abstractions disappear and understanding begins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/BayoJohn/Project-2.git" rel="noopener noreferrer"&gt;https://github.com/BayoJohn/Project-2.git&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>docker</category>
      <category>cicd</category>
    </item>
    <item>
      <title>How I Automated My First Node.js App Deployment Using AWS App Runner</title>
      <dc:creator>Bayo Ogundele</dc:creator>
      <pubDate>Thu, 15 Jan 2026 18:42:51 +0000</pubDate>
      <link>https://forem.com/bayo_ogundele_b3f16b3c436/how-i-automated-my-first-nodejs-app-deployment-using-aws-app-runner-4lg8</link>
      <guid>https://forem.com/bayo_ogundele_b3f16b3c436/how-i-automated-my-first-nodejs-app-deployment-using-aws-app-runner-4lg8</guid>
      <description>&lt;p&gt;As someone just starting out in &lt;strong&gt;DevOps&lt;/strong&gt; and &lt;strong&gt;cloud deployment&lt;/strong&gt;, I wanted to challenge myself: could I automate the deployment of a Node.js app in a way that’s &lt;strong&gt;repeatable, efficient, and doesn’t require managing servers&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;After some experimentation, I built a small project that demonstrates exactly that, and I’m sharing my journey, the tools I used, and the lessons I learned along the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎯 The Goal
&lt;/h2&gt;

&lt;p&gt;My goal was simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build a Node.js application&lt;/li&gt;
&lt;li&gt;Package it in a container&lt;/li&gt;
&lt;li&gt;Push it to a registry&lt;/li&gt;
&lt;li&gt;Deploy it automatically using a serverless service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sounds straightforward in theory—but as a beginner, I had to learn a lot about &lt;strong&gt;containers, registries, and deployment services&lt;/strong&gt; along the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ Tools I Used and Why
&lt;/h2&gt;

&lt;p&gt;Here’s a breakdown of each tool and why I chose it:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Node.js
&lt;/h3&gt;

&lt;p&gt;Since I was already familiar with JavaScript, Node.js was an obvious choice for my application. It’s lightweight, easy to set up, and widely supported by cloud platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; Fast to develop, easy to containerize, and perfect for small web apps.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Docker
&lt;/h3&gt;

&lt;p&gt;Docker allowed me to &lt;strong&gt;package my app and its dependencies into a container&lt;/strong&gt;, ensuring it would run the same way anywhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; Consistency across environments and seamless integration with AWS App Runner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fuy1i3l9w76t9tln2gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fuy1i3l9w76t9tln2gw.png" alt="Docker Image Build" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;
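&lt;p&gt;My actual Dockerfile isn’t shown here, but a minimal one for a Node.js app looks roughly like this sketch (the base image, entry file &lt;code&gt;index.js&lt;/code&gt;, and port 3000 are assumptions, not my exact setup):&lt;/p&gt;

```dockerfile
# Illustrative minimal Dockerfile for a Node.js app (placeholders throughout).
FROM node:18-alpine
WORKDIR /app
# Copy manifests first so the dependency layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# App Runner routes traffic to the port the container listens on
EXPOSE 3000
CMD ["node", "index.js"]
```

&lt;p&gt;Copying &lt;code&gt;package*.json&lt;/code&gt; before the rest of the source means Docker only re-installs dependencies when they actually change, which keeps rebuilds fast.&lt;/p&gt;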




&lt;h3&gt;
  
  
  3. AWS Elastic Container Registry (ECR)
&lt;/h3&gt;

&lt;p&gt;ECR is AWS’s managed container registry, similar to Docker Hub but natively integrated with other AWS services. I used it to &lt;strong&gt;store my Docker images&lt;/strong&gt; before deploying.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; It integrates seamlessly with AWS App Runner and simplifies authentication.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu8w8rzz8to5hn8ncovr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu8w8rzz8to5hn8ncovr.png" alt="Pushed Image Placeholder 1" width="800" height="188"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l6q1go2yp92kpmj3ytk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l6q1go2yp92kpmj3ytk.png" alt="Pushed Image Placeholder 2" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;
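&lt;p&gt;The login flow that took me trial and error boils down to a few commands. The account ID, region, and repository name below are placeholders, and the commands need working AWS credentials to run:&lt;/p&gt;

```shell
# Placeholders: substitute your own account ID, region, and repo name.
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=my-node-app
REGISTRY="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"

# Authenticate Docker against ECR (the token expires after 12 hours)
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Build, tag with the full registry path, and push
docker build -t "$REPO" .
docker tag "$REPO:latest" "$REGISTRY/$REPO:latest"
docker push "$REGISTRY/$REPO:latest"
```

&lt;p&gt;The part that confused me at first: the image must be tagged with the full registry hostname before &lt;code&gt;docker push&lt;/code&gt; knows where to send it.&lt;/p&gt;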




&lt;h3&gt;
  
  
  4. AWS App Runner
&lt;/h3&gt;

&lt;p&gt;This was my first real experience with a serverless deployment service. App Runner lets you &lt;strong&gt;deploy containerized apps without worrying about servers&lt;/strong&gt;. It handles scaling, HTTPS, and load balancing automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; Removes operational overhead and is perfect for beginners who want to focus on the app rather than infrastructure.&lt;/p&gt;




&lt;h3&gt;
  
  
  5. Shell Scripting (deploy.sh)
&lt;/h3&gt;

&lt;p&gt;To automate the deployment, I wrote a small script that &lt;strong&gt;builds the Docker image, pushes it to ECR, and triggers App Runner to deploy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; Automates repetitive tasks and reduces the chance of human error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Even as a beginner, scripting these steps helped me understand the workflow better.&lt;/p&gt;
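&lt;p&gt;My script isn’t reproduced here, but a &lt;code&gt;deploy.sh&lt;/code&gt; along the lines I describe might look like this sketch. All identifiers are placeholders, and it assumes Docker is already logged in to ECR:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Illustrative sketch of a deploy.sh; every identifier here is a placeholder.
set -euo pipefail

REGION=us-east-1
REPO=my-node-app
REGISTRY="123456789012.dkr.ecr.$REGION.amazonaws.com"
SERVICE_ARN="REPLACE_WITH_YOUR_APP_RUNNER_SERVICE_ARN"

# 1. Build and tag the image
docker build -t "$REPO" .
docker tag "$REPO:latest" "$REGISTRY/$REPO:latest"

# 2. Push it to ECR
docker push "$REGISTRY/$REPO:latest"

# 3. Ask App Runner to pull and deploy the freshly pushed image
aws apprunner start-deployment --service-arn "$SERVICE_ARN"
echo "Deployment triggered."
```

&lt;p&gt;With something like this in place, shipping an update really is one command: edit the code, run the script, and App Runner does the rest.&lt;/p&gt;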




&lt;h2&gt;
  
  
  ⚙️ How It Works
&lt;/h2&gt;

&lt;p&gt;Here’s a simplified overview of the deployment pipeline I built:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Develop the Node.js app locally&lt;/li&gt;
&lt;li&gt;Containerize the app using Docker&lt;/li&gt;
&lt;li&gt;Push the Docker image to AWS ECR&lt;/li&gt;
&lt;li&gt;Trigger AWS App Runner to deploy the new image&lt;/li&gt;
&lt;li&gt;App Runner automatically hosts and scales the app&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Even though I’m new to deployment pipelines, having this workflow in place made updating my app as simple as running a single script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50kwgzjlcgylu2acgbyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50kwgzjlcgylu2acgbyw.png" alt="Deployment Pipeline" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🚧 Challenges I Faced as a Beginner
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker authentication issues&lt;/strong&gt; – Learning how AWS ECR login works took some trial and error.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Understanding App Runner concepts&lt;/strong&gt; – At first, I wasn’t sure how services, images, and deployments connect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation scripting&lt;/strong&gt; – Writing a script that could handle errors and run in one go was tricky.&lt;/li&gt;
&lt;/ul&gt;
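&lt;p&gt;For the scripting challenge specifically, the "handle errors and run in one go" part mostly came down to strict shell options. A skeleton like this (illustrative, not my exact script) stops at the first failed step instead of charging ahead:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Illustrative error-handling skeleton for a one-shot deploy script.
set -euo pipefail   # stop on the first error, unset variable, or pipe failure
trap 'echo "deploy failed near line $LINENO"' ERR

step() {            # announce each stage as it runs
  echo ":: $1"
}

step "build image"
step "push to ECR"
step "trigger App Runner deployment"
echo "done"
```

&lt;p&gt;Without &lt;code&gt;set -e&lt;/code&gt;, a failed &lt;code&gt;docker push&lt;/code&gt; would not stop the script, and it would happily trigger a deployment of an image that never arrived.&lt;/p&gt;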

&lt;p&gt;Despite these challenges, I learned a lot about &lt;strong&gt;containerization, cloud hosting, and deployment pipelines&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  📚 What I Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automation saves time:&lt;/strong&gt; Writing scripts for deployment is much faster than doing everything manually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless hosting is beginner-friendly:&lt;/strong&gt; App Runner abstracts complex server management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step-by-step learning works best:&lt;/strong&gt; I started with a small Node.js app and gradually added Docker, ECR, and App Runner.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🚀 Next Steps
&lt;/h2&gt;

&lt;p&gt;Now that I have a working deployment pipeline, my plans for improvement include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Explore CI/CD integration with GitHub Actions&lt;/strong&gt; – Automate builds and deployments whenever code is pushed to the repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add environment variables and secrets management&lt;/strong&gt; – Make the pipeline more secure and flexible for different environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy more complex applications&lt;/strong&gt; – Test the workflow with larger, multi-service apps to gain deeper experience.&lt;/li&gt;
&lt;/ul&gt;
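&lt;p&gt;For the GitHub Actions idea, the workflow would roughly mirror what &lt;code&gt;deploy.sh&lt;/code&gt; already does, just triggered on every push. This is a sketch I haven’t run yet; the secret names and region are placeholders:&lt;/p&gt;

```yaml
# .github/workflows/deploy.yml -- illustrative sketch, not a tested workflow.
name: Deploy to App Runner
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - uses: aws-actions/amazon-ecr-login@v2
      - name: Build, push, and trigger deployment
        run: ./deploy.sh
```

&lt;p&gt;Reusing the existing script inside the workflow keeps local and automated deployments identical, which is one less thing to debug.&lt;/p&gt;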




&lt;h2&gt;
  
  
  💡 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Even as a beginner, building this project has given me &lt;strong&gt;confidence in cloud deployment and DevOps workflows&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you’re starting out too, I highly recommend experimenting with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Containerized apps&lt;/strong&gt; – Learn how to package and run applications consistently across environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless deployment&lt;/strong&gt; – Focus on your app rather than managing infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s both &lt;strong&gt;educational and rewarding&lt;/strong&gt;, and it gives you a solid foundation to grow in DevOps and cloud engineering.&lt;/p&gt;




</description>
      <category>aws</category>
      <category>beginners</category>
      <category>devops</category>
      <category>node</category>
    </item>
  </channel>
</rss>
