<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Oluwajuwon Odunitan</title>
    <description>The latest articles on Forem by Oluwajuwon Odunitan (@jaywon).</description>
    <link>https://forem.com/jaywon</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3584584%2F3110f8d9-29c2-4c2a-9244-c729d3ae5293.png</url>
      <title>Forem: Oluwajuwon Odunitan</title>
      <link>https://forem.com/jaywon</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jaywon"/>
    <language>en</language>
    <item>
      <title>Linux Health Sentinel Phase 2: From Metrics to Meanings with Grafana Loki</title>
      <dc:creator>Oluwajuwon Odunitan</dc:creator>
      <pubDate>Thu, 05 Feb 2026 17:17:33 +0000</pubDate>
      <link>https://forem.com/jaywon/linux-health-sentinel-phase-2-from-metrics-to-meanings-with-grafana-loki-407d</link>
      <guid>https://forem.com/jaywon/linux-health-sentinel-phase-2-from-metrics-to-meanings-with-grafana-loki-407d</guid>
      <description>&lt;p&gt;In my last post, I shared how I moved from blindly running commands to seeing my infrastructure breathe through metrics. But as every DevOps learner quickly discovers, &lt;strong&gt;&lt;em&gt;metrics tell you there is a problem; logs tell you what the problem is.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Today, we’re giving our "Linux Health Sentinel" the ability to listen. We are adding centralised logging using Loki and Promtail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Concept: Metrics vs. Logs&lt;/strong&gt;&lt;br&gt;
If your server’s CPU spikes to 99%, Prometheus will show you a scary red line on a graph. That’s a metric. But why did it spike? Was it a brute-force SSH attack? A memory leak in a script?&lt;/p&gt;

&lt;p&gt;To find out, you need the text records, &lt;em&gt;the logs.&lt;/em&gt;&lt;/p&gt;
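&lt;p&gt;To make the split concrete, here is a toy sketch (the numbers and log lines are invented, not from a real server): the metric flags the spike, and only the logs point to the cause.&lt;/p&gt;

```python
# Toy data: a CPU metric series and a handful of syslog-style lines.
cpu_samples = [12.0, 14.5, 99.2]  # a metric: just numbers over time
logs = [
    "sshd[912]: Failed password for root from 203.0.113.7",
    "sshd[912]: Failed password for root from 203.0.113.7",
    "cron[233]: job started",
]

# The metric tells you THAT something is wrong (the spike)...
peak = max(cpu_samples)
print(peak)

# ...the logs tell you WHAT: search them for a likely cause.
suspects = [line for line in logs if "Failed password" in line]
print(len(suspects))
```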

&lt;p&gt;&lt;strong&gt;The Architecture&lt;/strong&gt;&lt;br&gt;
We are adding two new components to our existing setup:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Loki (The Library):&lt;/strong&gt; Lives on your laptop. It stores the logs and lets you search them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Promtail (The Spy):&lt;/strong&gt; Lives on the Vagrant VM. It "tails" the log files (like &lt;code&gt;tail -f&lt;/code&gt;) and ships them to Loki.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhogbnb2m8yhqf1ufowwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhogbnb2m8yhqf1ufowwg.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ubuntu laptop with Grafana + Prometheus running&lt;/li&gt;
&lt;li&gt;Vagrant VM from Phase 1 (or any local VM of your choice)&lt;/li&gt;
&lt;li&gt;Basic networking between host and VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Setting up the Library (Loki)&lt;/strong&gt;&lt;br&gt;
On your control centre (laptop), we need to get Loki running. Loki is "Prometheus, but for logs." While some package managers have Loki, the safest and most consistent path is using the official binaries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Download and unzip Loki
wget https://github.com/grafana/loki/releases/latest/download/loki-linux-amd64.zip
sudo apt update &amp;amp;&amp;amp; sudo apt install unzip -y
unzip loki-linux-amd64.zip
chmod +x loki-linux-amd64

# Download the default config file
wget https://raw.githubusercontent.com/grafana/loki/main/cmd/loki/loki-local-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run Loki:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./loki-linux-amd64 -config.file=loki-local-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;PS: This setup is for local learning only and runs without authentication. Do not expose Loki directly to the internet.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Deploying the Spy (Promtail)&lt;/strong&gt;&lt;br&gt;
Now, hop into your Vagrant VM. We need an agent to grab those system logs and send them over the network to your laptop.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install Promtail:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -O -L "https://github.com/grafana/loki/releases/download/v3.5.9/promtail-linux-amd64.zip"
unzip promtail-linux-amd64.zip
chmod a+x promtail-linux-amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Configure Promtail:&lt;/strong&gt; Download the basic config file to tell Promtail where to send the logs.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://raw.githubusercontent.com/grafana/loki/main/clients/cmd/promtail/promtail-local-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Edit the Config:&lt;/strong&gt; Change the &lt;code&gt;clients&lt;/code&gt; URL to your laptop's IP address.
&lt;em&gt;Tip: Run &lt;code&gt;hostname -I&lt;/code&gt; on your laptop to find the IP address your VM needs to talk to.&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clients:
  - url: http://&amp;lt;YOUR_LAPTOP_IP&amp;gt;:3100/loki/api/v1/push

scrape_configs:
- job_name: system
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs
      host: vagrant-vm
      __path__: /var/log/*log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
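&lt;p&gt;The &lt;code&gt;__path__: /var/log/*log&lt;/code&gt; line is an ordinary filename glob. A quick sketch of what it matches, using a throwaway directory rather than a real &lt;code&gt;/var/log&lt;/code&gt;:&lt;/p&gt;

```python
import glob
import os
import tempfile

# Create a throwaway directory with typical /var/log file names.
tmp = tempfile.mkdtemp()
for name in ["syslog", "auth.log", "dpkg.log", "README"]:
    open(os.path.join(tmp, name), "w").close()

# The same pattern Promtail expands: anything ending in "log".
matched = sorted(os.path.basename(p) for p in glob.glob(os.path.join(tmp, "*log")))
print(matched)
```

Note that &lt;code&gt;syslog&lt;/code&gt; matches too, since the pattern only requires the name to end in "log".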


&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Run Promtail:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./promtail-linux-amd64 -config.file=promtail-local-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
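&lt;p&gt;Running Promtail in the foreground is fine for a lab, but it dies with your SSH session. If you want it to survive reboots, a minimal systemd unit is one option. The binary and config paths below are assumptions based on the steps above; adjust them to wherever you unzipped Promtail:&lt;/p&gt;

```ini
# /etc/systemd/system/promtail.service (paths are assumptions; adjust to your setup)
[Unit]
Description=Promtail log shipper
After=network-online.target

[Service]
ExecStart=/home/vagrant/promtail-linux-amd64 -config.file=/home/vagrant/promtail-local-config.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

&lt;p&gt;Then run &lt;code&gt;sudo systemctl daemon-reload&lt;/code&gt; followed by &lt;code&gt;sudo systemctl enable --now promtail&lt;/code&gt;.&lt;/p&gt;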


&lt;p&gt;&lt;strong&gt;Step 3: Visualisation in Grafana&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Grafana (localhost:3000).&lt;/li&gt;
&lt;li&gt;Add Data Source -&amp;gt; Select Loki.&lt;/li&gt;
&lt;li&gt;Set URL to &lt;a href="http://localhost:3100" rel="noopener noreferrer"&gt;http://localhost:3100&lt;/a&gt;. Click Save &amp;amp; Test.&lt;/li&gt;
&lt;li&gt;Go to the Explore tab (compass icon).&lt;/li&gt;
&lt;li&gt;Use the Label Browser to select &lt;code&gt;job="varlogs"&lt;/code&gt; or &lt;code&gt;host="vagrant-vm"&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Click Run Query.&lt;/li&gt;
&lt;/ol&gt;
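&lt;p&gt;Once the labels show up, you can go beyond the Label Browser and type LogQL queries directly. A few starter examples (the label values assume the Promtail config above):&lt;/p&gt;

```
{job="varlogs"}
{job="varlogs"} |= "error"
{host="vagrant-vm"} |~ "Failed password|Connection refused"
```

&lt;p&gt;The first streams everything carrying that label, &lt;code&gt;|=&lt;/code&gt; keeps only lines containing a substring, and &lt;code&gt;|~&lt;/code&gt; filters by regular expression.&lt;/p&gt;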

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5loh5erc1xoz00tizyr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5loh5erc1xoz00tizyr1.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generating Test Logs&lt;/strong&gt;&lt;br&gt;
You should already see some logs streaming in. We can also generate some noise on the VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo logger "Sentinel Test: Can you hear me, Grafana?"
sudo logger "Hello Loki, this is a test"
sudo logger "Sentinel Alert: Testing log flow to Grafana"
sudo logger -p user.err "Simulating a critical system error"
sudo logger "Hello Loki, this is test-2."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93393zglvcao5btu4pxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93393zglvcao5btu4pxp.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapun5a0txrn18gmh5yfy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapun5a0txrn18gmh5yfy.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don't expect a &lt;em&gt;"Matrix-style"&lt;/em&gt; scrolling screen immediately! By default, Grafana shows a static snapshot. To see logs fly across your screen in real-time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable "Live" Mode: Look for the Live button in the top right of the Grafana UI.&lt;/li&gt;
&lt;li&gt;Adjust Auto-Refresh: Set the timer to 5s or 10s.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
We have metrics, and we have logs. Our Sentinel is getting smarter. But we still have to look at the screen to know something is wrong.&lt;/p&gt;

&lt;p&gt;Next, we’ll teach it to speak to us via Slack, Discord, or email when it detects trouble, with &lt;strong&gt;Alerting&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>linux</category>
      <category>monitoring</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Monitoring and Observing My First Linux Server: A Beginner’s Guide to Prometheus &amp; Grafana.</title>
      <dc:creator>Oluwajuwon Odunitan</dc:creator>
      <pubDate>Tue, 03 Feb 2026 09:18:48 +0000</pubDate>
      <link>https://forem.com/jaywon/monitoring-and-observing-my-first-linux-server-a-beginners-guide-to-prometheus-grafana-c6j</link>
      <guid>https://forem.com/jaywon/monitoring-and-observing-my-first-linux-server-a-beginners-guide-to-prometheus-grafana-c6j</guid>
      <description>&lt;p&gt;Have you ever wondered what’s actually happening inside a Linux server? &lt;br&gt;
As a DevOps engineer and learner, you will eventually move past "just running commands" to actually observing your infrastructure to know what is happening under the hood. &lt;/p&gt;

&lt;p&gt;So I built a simple observability stack using Prometheus and Grafana, which I called "Linux Health Sentinel". This guide walks you through that exact beginner-friendly setup to see how data flows from a raw event to a visual graph.&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;Monitoring vs. Observability: What's the Difference?&lt;/strong&gt;&lt;br&gt;
Before diving in, it's important to understand these two core concepts in DevOps and cloud environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitoring: Tells you if a system is working and/or if something is broken (e.g., "Is the CPU over 80%?").&lt;/li&gt;
&lt;li&gt;Observability: Helps you understand &lt;em&gt;why&lt;/em&gt;, by looking at the data (the "signals") the system emits.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt; is a system, not just a tool:&lt;br&gt;
&lt;strong&gt;&lt;em&gt;sensors → data pipeline → dashboard&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 4 Phases of Observability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Phase 1 → Instrumentation (Sensors)&lt;/strong&gt;
Tools on the target that emit signals from infrastructure and apps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 2 → Collection (The Pipeline)&lt;/strong&gt;
Collectors clean, label, and route the data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 3 → Storage (The Library)&lt;/strong&gt;
Metrics, logs, and traces live in optimised databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 4 → Visualisation &amp;amp; Alerting&lt;/strong&gt;
Dashboards + alerts turn data into action.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Project Goal&lt;/strong&gt;&lt;br&gt;
Visualise the real-time CPU, memory, and disk health of an Ubuntu VM using the industry-standard Prometheus and Grafana stack.&lt;/p&gt;

&lt;p&gt;🛠️ &lt;strong&gt;The Tech Stack&lt;/strong&gt;&lt;br&gt;
Target → VM&lt;br&gt;
Control Center → Laptop&lt;/p&gt;

&lt;p&gt;To build my sentinel, I used the "Golden Trio" of open-source observability:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node Exporter: The "spy/sensor and collector" on the VM (target) that gathers hardware stats.&lt;/li&gt;
&lt;li&gt;Prometheus: The "brain and storage" that pulls data and stores it in a time-series database.&lt;/li&gt;
&lt;li&gt;Grafana: The "face" that turns raw numbers into beautiful, readable dashboards.&lt;/li&gt;
&lt;/ul&gt;
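&lt;p&gt;It helps to know what Node Exporter's output actually looks like: a plain-text format with one metric per line, plus &lt;code&gt;# HELP&lt;/code&gt; and &lt;code&gt;# TYPE&lt;/code&gt; comment lines. A tiny illustrative parser (not the real client libraries, and the sample values are made up; real lines can also carry &lt;code&gt;{label="..."}&lt;/code&gt; pairs, which this sketch ignores):&lt;/p&gt;

```python
# A hand-written sample in the Prometheus text exposition format.
sample = """\
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
node_memory_MemFree_bytes 1.02e+09
"""

metrics = {}
for line in sample.splitlines():
    if line.startswith("#") or not line.strip():
        continue  # skip HELP/TYPE comments and blank lines
    name, value = line.rsplit(" ", 1)
    metrics[name] = float(value)

print(metrics["node_load1"])
```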

&lt;p&gt;&lt;strong&gt;🚀 Step-by-Step Implementation&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Step 1: Set Up the "Target" (VirtualBox VM)&lt;/strong&gt;&lt;br&gt;
This VM represents a production server in a data centre.&lt;br&gt;
a. &lt;strong&gt;Network Setup (Crucial):&lt;/strong&gt; * In VirtualBox, select your VM &amp;gt; &lt;strong&gt;Settings&lt;/strong&gt; &amp;gt; &lt;strong&gt;Network&lt;/strong&gt;.&lt;br&gt;
    - Change "Attached to" from NAT to &lt;strong&gt;Bridged Adapter&lt;/strong&gt;.&lt;br&gt;
    - &lt;em&gt;Why?&lt;/em&gt; This gives the VM an IP address on your home network so your laptop can "talk" to it.&lt;br&gt;
b. &lt;strong&gt;Start the VM&lt;/strong&gt; and find its IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;I&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note this down; let's assume it is &lt;code&gt;192.168.1.50&lt;/code&gt;&lt;/em&gt;&lt;br&gt;
c. &lt;strong&gt;Install the "Spy" (Node Exporter):&lt;/strong&gt;&lt;br&gt;
Node Exporter is a small tool that translates Linux system stats into a format Prometheus understands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt; &lt;span class="nx"&gt;update&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;exporter&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;d. &lt;strong&gt;Verify:&lt;/strong&gt; Open the browser on your &lt;strong&gt;laptop&lt;/strong&gt; and go to &lt;code&gt;http://192.168.1.50:9100/metrics&lt;/code&gt;. If you see a wall of text, the spy is active.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Set Up the "Control Center" (Your Laptop)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your laptop will act as the monitoring server that "pulls" data from the VM.&lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;Install Prometheus (The Database):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt; &lt;span class="nx"&gt;update&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. &lt;strong&gt;Configure Prometheus to watch the VM:&lt;/strong&gt; Edit the configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;nano&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Find the &lt;code&gt;scrape_configs:&lt;/code&gt; section and add this "job" (keep the indentation consistent with the existing jobs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;job_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ubuntu_vm&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="nx"&gt;static_configs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;targets&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;192.168.1.50:9100&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Save and Exit (Ctrl+O, Enter, Ctrl+X).&lt;/em&gt;&lt;br&gt;
c. &lt;strong&gt;Restart Prometheus:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;systemctl&lt;/span&gt; &lt;span class="nx"&gt;restart&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
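&lt;p&gt;After the restart, you can sanity-check the scrape in Prometheus itself (&lt;code&gt;http://localhost:9090&lt;/code&gt;) with a PromQL query. This is the commonly used "CPU usage percent" expression for Node Exporter data:&lt;/p&gt;

```
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
```

&lt;p&gt;It takes the per-second rate of idle CPU time over the last 5 minutes, averages it across cores, and subtracts from 100 to get busy time.&lt;/p&gt;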



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jvw9hya7pjlomtpivtx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jvw9hya7pjlomtpivtx.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Visualise with Grafana&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;Install Grafana:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;wget&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;O&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /usr/share/keyrings/grafana.gpg &amp;gt; /dev/null&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt; &lt;span class="nx"&gt;update&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apt&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;grafana&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;
&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;systemctl&lt;/span&gt; &lt;span class="nx"&gt;enable&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;now&lt;/span&gt; &lt;span class="nx"&gt;grafana&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. &lt;strong&gt;Access Grafana:&lt;/strong&gt; Open &lt;code&gt;http://localhost:3000&lt;/code&gt; (User/Pass: &lt;code&gt;admin&lt;/code&gt;/&lt;code&gt;admin&lt;/code&gt;).&lt;br&gt;
c. &lt;strong&gt;Connect the dots:&lt;/strong&gt;&lt;br&gt;
    - Go to &lt;strong&gt;Connections&lt;/strong&gt; &amp;gt; &lt;strong&gt;Data Sources&lt;/strong&gt; &amp;gt; &lt;strong&gt;Add Data Source&lt;/strong&gt;.&lt;br&gt;
    - Select &lt;strong&gt;Prometheus&lt;/strong&gt;.&lt;br&gt;
    - Set URL to &lt;code&gt;http://localhost:9090&lt;/code&gt;. Click &lt;strong&gt;Save &amp;amp; Test&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsq0wtrii62iku4jxg5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsq0wtrii62iku4jxg5y.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;d. &lt;strong&gt;Import the Pro Dashboard:&lt;/strong&gt;&lt;br&gt;
    - Click the &lt;strong&gt;+&lt;/strong&gt; (top right) &amp;gt; &lt;strong&gt;Import&lt;/strong&gt;.&lt;br&gt;
    - Enter ID &lt;strong&gt;1860&lt;/strong&gt; (the community-standard &lt;em&gt;Node Exporter Full&lt;/em&gt; dashboard).&lt;br&gt;
    - Select your Prometheus data source and click &lt;strong&gt;Import&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqmrc9lp1vai66r0lakv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqmrc9lp1vai66r0lakv.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;The Result&lt;/strong&gt;&lt;br&gt;
With just a VM, Prometheus, and Grafana, we built a real observability pipeline.&lt;br&gt;
We can now see CPU, memory, and disk metrics in real time, exactly how production monitoring works in professional environments.&lt;/p&gt;

&lt;p&gt;This small project is the foundation of modern DevOps observability. From here, you can explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;alerting with Alertmanager&lt;/li&gt;
&lt;li&gt;container monitoring&lt;/li&gt;
&lt;li&gt;Kubernetes observability&lt;/li&gt;
&lt;li&gt;logs and distributed tracing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Observability isn’t just about dashboards; it’s about understanding your systems deeply.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>linux</category>
      <category>monitoring</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>🌩️ Cloud Resume Challenge (AWS Edition)</title>
      <dc:creator>Oluwajuwon Odunitan</dc:creator>
      <pubDate>Mon, 27 Oct 2025 13:15:46 +0000</pubDate>
      <link>https://forem.com/jaywon/cloud-resume-challenge-aws-edition-4kf2</link>
      <guid>https://forem.com/jaywon/cloud-resume-challenge-aws-edition-4kf2</guid>
      <description>&lt;h2&gt;
  
  
  ✨ Introduction
&lt;/h2&gt;

&lt;p&gt;Two years ago, I started a simple project to explore Infrastructure as Code (IaC) using Terraform. I recently revisited it with a fresh perspective — not just to complete it, but to dive deeper into managing existing cloud resources with IaC.&lt;/p&gt;

&lt;p&gt;What sparked this? I noticed that many people skip the backend IaC part of the &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/" rel="noopener noreferrer"&gt;Cloud Resume Challenge&lt;/a&gt;, and I wanted to change that.&lt;/p&gt;

&lt;p&gt;This challenge is a hands-on cloud engineering project that helps you build a real-world, serverless web application using AWS, Azure, or GCP. I chose AWS. It’s more than just a static site — it connects frontend, backend, CI/CD, and IaC concepts into one project, with security baked in.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧱 Project Overview
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; To begin, you’ll need a domain name (e.g., from &lt;a href="https://namecheap.com" rel="noopener noreferrer"&gt;Namecheap&lt;/a&gt;), an account with a cloud provider, and programmatic access credentials.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hosts a personal resume website&lt;/li&gt;
&lt;li&gt;Displays a visitor counter powered by AWS Lambda and DynamoDB&lt;/li&gt;
&lt;li&gt;Uses a CI/CD pipeline for automated testing and deployment&lt;/li&gt;
&lt;li&gt;Secures the site with HTTPS via AWS Certificate Manager&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📐 Architecture Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cpbq81t7d75toexddyt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cpbq81t7d75toexddyt.jpg" alt="Project Architecture Diagram" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🔧 Core AWS Components
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Amazon S3 (Static Website Hosting)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Website files are uploaded to an S3 bucket.&lt;/li&gt;
&lt;li&gt;Public access is disabled; HTTPS is enabled via CloudFront.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. AWS CloudFront (CDN)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Serves S3 content securely over HTTPS.&lt;/li&gt;
&lt;li&gt;Uses an SSL/TLS certificate from AWS Certificate Manager.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. AWS Certificate Manager (SSL)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provides a free SSL/TLS certificate for the domain.&lt;/li&gt;
&lt;li&gt;Attached to the CloudFront distribution for secure access.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. AWS Route 53 (DNS)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Manages DNS settings linked to Namecheap.&lt;/li&gt;
&lt;li&gt;Routes traffic to CloudFront via an alias record.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. AWS Lambda + DynamoDB (Visitor Counter)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;DynamoDB stores visitor count.&lt;/li&gt;
&lt;li&gt;A Python-based Lambda function retrieves and increments the count.&lt;/li&gt;
&lt;li&gt;Triggered via Lambda Function URL when the site loads.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;This setup is fully serverless — no EC2 instances to manage!&lt;/p&gt;
&lt;/blockquote&gt;
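&lt;p&gt;The counter logic itself is small. Here is a sketch of the idea using an in-memory stand-in for the table (the real handler would use &lt;code&gt;boto3&lt;/code&gt; and let DynamoDB do the increment atomically with an &lt;code&gt;ADD&lt;/code&gt; update expression; all names below are illustrative):&lt;/p&gt;

```python
# In-memory stand-in for the DynamoDB table (illustrative only).
fake_table = {"views": {"id": "views", "count": 0}}

def increment_views(table, key="views"):
    """Bump the counter and return the new value.

    With real DynamoDB you would not read-modify-write like this;
    an UpdateItem with an "ADD count :one" expression does it atomically.
    """
    item = table[key]
    item["count"] += 1
    return item["count"]

print(increment_views(fake_table))
print(increment_views(fake_table))
```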




&lt;h2&gt;
  
  
  🔄 CI/CD Pipeline with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;To automate testing and deployment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On every push to &lt;code&gt;main&lt;/code&gt;, GitHub Actions triggers two workflows:

&lt;ul&gt;
&lt;li&gt;One syncs updated website files to S3.&lt;/li&gt;
&lt;li&gt;The other runs tests on the Lambda function and deploys infrastructure via Terraform (if tests pass).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This ensures fast, reliable deployments with every change.&lt;/p&gt;
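&lt;p&gt;A minimal sketch of what the frontend sync workflow might look like (the workflow name, site directory, bucket name, and secret names here are placeholders, not my actual repo):&lt;/p&gt;

```yaml
# .github/workflows/deploy-frontend.yml (all names are placeholders)
name: deploy-frontend
on:
  push:
    branches: [main]
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Sync site files to S3
        run: aws s3 sync ./website s3://YOUR_BUCKET_NAME --delete
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1
```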




&lt;h2&gt;
  
  
  📦 Infrastructure as Code with Terraform
&lt;/h2&gt;

&lt;p&gt;Following the challenge, most people use &lt;strong&gt;ClickOps&lt;/strong&gt; (manual setup via AWS Console) to get started. But when I reached the IaC stage, I realized I’d need to recreate everything in Terraform — even though my site had over 500 views!&lt;/p&gt;

&lt;p&gt;This led me to discover the power of the &lt;code&gt;terraform import&lt;/code&gt; command.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 What is &lt;code&gt;terraform import&lt;/code&gt;?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;terraform import&lt;/code&gt; brings existing resources under Terraform’s management — even if they weren’t created with Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Syntax:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform import &lt;span class="o"&gt;[&lt;/span&gt;options] ADDRESS ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;ADDRESS&lt;/code&gt;: The resource address in your configuration (e.g., &lt;code&gt;aws_dynamodb_table.cloudResumeViewsTable&lt;/code&gt;)&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ID&lt;/code&gt;: The provider-specific identifier of the existing resource (a name, ID, or ARN, depending on the resource type)&lt;/p&gt;

&lt;h3&gt;
  
  
  Example: Importing DynamoDB Table
&lt;/h3&gt;

&lt;p&gt;In &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_dynamodb_table" "cloudResumeViewsTable" {
  name           = "cloudResumeViewsTable"
  billing_mode   = "unknown"
  read_capacity  = "unknown"
  write_capacity = "unknown"
  hash_key       = "unknown"

  attribute {
    name = "id"
    type = "S"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Import command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform import aws_dynamodb_table.cloudResumeViewsTable arn:aws:dynamodb:us-east-1:357078656374:table/cloudResumeViewsTable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command imports the existing DynamoDB table into Terraform’s state, allowing you to manage it as code.&lt;/p&gt;

&lt;p&gt;Note that &lt;code&gt;terraform import&lt;/code&gt; only imports the resource into the Terraform state file; it does not generate the corresponding configuration in the &lt;code&gt;.tf&lt;/code&gt; file. The resource block must already exist in the &lt;code&gt;.tf&lt;/code&gt; file before the resource can be imported.&lt;/p&gt;
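&lt;p&gt;Worth knowing: Terraform 1.5+ also supports declarative &lt;code&gt;import&lt;/code&gt; blocks, which can generate that configuration for you. A sketch (the ID format is resource-specific, so check the provider docs for each resource you import):&lt;/p&gt;

```
import {
  to = aws_dynamodb_table.cloudResumeViewsTable
  id = "cloudResumeViewsTable"  # provider-specific import ID
}
```

&lt;p&gt;Running &lt;code&gt;terraform plan -generate-config-out=generated.tf&lt;/code&gt; then writes a matching resource block into &lt;code&gt;generated.tf&lt;/code&gt;, which you can tidy up and move into &lt;code&gt;main.tf&lt;/code&gt;.&lt;/p&gt;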

&lt;h3&gt;
  
  
  🧩 Terraform Import Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Migrating Existing Infrastructure: Bring manually created resources under Terraform control.&lt;/li&gt;
&lt;li&gt;Hybrid Environments: Gradually transition resources to IaC.&lt;/li&gt;
&lt;li&gt;Disaster Recovery: Re-import recreated resources after recovery.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🗺️ Planning Imports for Complex Deployments
&lt;/h2&gt;

&lt;p&gt;Importing resources in a complex deployment can be tedious. You often can’t import everything at once — instead, bring resources into Terraform gradually and intentionally.&lt;br&gt;
A well-drawn architecture diagram helps you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visualize the system&lt;/li&gt;
&lt;li&gt;Break it into manageable pieces&lt;/li&gt;
&lt;li&gt;Map out a clear import plan&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ✅ Conclusion
&lt;/h2&gt;

&lt;p&gt;Revisiting this project helped me bridge the gap between manual cloud setup and full IaC adoption. By using &lt;code&gt;terraform import&lt;/code&gt;, I now manage all resources through code, making future updates safer, faster, and more scalable.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>challenge</category>
      <category>aws</category>
      <category>career</category>
    </item>
  </channel>
</rss>
