<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alex Umansky</title>
    <description>The latest articles on Forem by Alex Umansky (@thebluedrara).</description>
    <link>https://forem.com/thebluedrara</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3078083%2F6b3ebe1e-e93a-4c27-88e5-7eefa28dc99f.png</url>
      <title>Forem: Alex Umansky</title>
      <link>https://forem.com/thebluedrara</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/thebluedrara"/>
    <language>en</language>
    <item>
      <title>How to collect system, service and kernel logs using Alloy, Loki and Grafana.</title>
      <dc:creator>Alex Umansky</dc:creator>
      <pubDate>Sat, 18 Apr 2026 16:44:26 +0000</pubDate>
      <link>https://forem.com/thebluedrara/how-to-collect-system-service-and-kernel-logs-using-alloy-loki-and-grafana-24fk</link>
      <guid>https://forem.com/thebluedrara/how-to-collect-system-service-and-kernel-logs-using-alloy-loki-and-grafana-24fk</guid>
      <description>&lt;h1&gt;
  
  
  Who am I?
&lt;/h1&gt;

&lt;p&gt;Hello dear readers, my name is Alex Umansky, aka TheBlueDrara. I'm 25 years old and I've been working in a large neo-cloud HPC data center for quite a while now.&lt;/p&gt;

&lt;p&gt;One of my biggest responsibilities is being the eyes and ears of the DC, and as time passed I found myself adding more and more tools to my monitoring stack so I could see as much data as possible.&lt;/p&gt;

&lt;p&gt;And as you may have guessed, there aren't many guides or much detailed documentation out there to help accomplish this task.&lt;/p&gt;

&lt;p&gt;So I decided to take the time and share my knowledge and try to make my dear readers' lives easier in setting up their own monitoring system.&lt;/p&gt;

&lt;p&gt;Going forward, I'll be documenting and writing guides about my monitoring projects.&lt;/p&gt;

&lt;p&gt;You can start by viewing my first monitoring guide, on how to use the SNMP exporter with Prometheus and Grafana to pull server and hardware health state via BMCs on an out-of-band network, &lt;a href="https://dev.to/thebluedrara/how-to-monitor-network-device-health-using-snmp-exporter-and-prometheus-1ee"&gt;here&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;In this guide I will talk about how to pull &lt;em&gt;system&lt;/em&gt;, &lt;em&gt;service&lt;/em&gt; and &lt;em&gt;kernel&lt;/em&gt; logs from the hosts on the network via in-band networking, using Alloy and Loki as our main stack, and visualize the logs with Grafana.&lt;/p&gt;

&lt;p&gt;We will start by shallow diving into what Alloy and Loki are, our main tools for capturing the logs and making them usable, and then continue to deploying them in our environment as containers.&lt;/p&gt;

&lt;p&gt;So let's not delay any further and jump into the guide.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: I won't show how to deploy Grafana, as it's quite basic; to keep this guide from getting too long, I'll focus only on Alloy and Loki.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;For this setup we will need three main tools. I have added a link for each where you can find the needed image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Loki&lt;/strong&gt; — &lt;code&gt;grafana/loki:2.9.17&lt;/code&gt; — &lt;a href="https://hub.docker.com/layers/grafana/loki/2.9.17/images/sha256-62ca46512f854d49ae1c568c01f8619196aac9ee078ba87673e10ab578728246" rel="noopener noreferrer"&gt;image&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alloy&lt;/strong&gt; — &lt;code&gt;grafana/alloy:v1.11.3&lt;/code&gt; — &lt;a href="https://hub.docker.com/layers/grafana/alloy/v1.11.3/images/sha256-5f5e793a194964a0019901c0b0e5d3cee0d9393eb862f0dc6c49c1bc82cb2c1b" rel="noopener noreferrer"&gt;image&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grafana&lt;/strong&gt; — &lt;code&gt;grafana/grafana:12.0.0&lt;/code&gt; — &lt;a href="https://grafana.com/grafana/download/12.0.0?platform=docker" rel="noopener noreferrer"&gt;image&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;The architecture is quite simple: client-server, push-based.&lt;/p&gt;

&lt;p&gt;On each node we want to monitor, we run an Alloy container that collects the host's logs and pushes them to a central Loki server.&lt;/p&gt;

&lt;p&gt;A single Loki server listens for the logs being pushed.&lt;/p&gt;

&lt;p&gt;Grafana will query and visualize the logs from Loki.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Host 1 + Alloy] ──┐
[Host 2 + Alloy] ──┼──push──&amp;gt; [Loki] &amp;lt;──query── [Grafana]
[Host N + Alloy] ──┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The How To
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Run a Loki Server
&lt;/h3&gt;

&lt;p&gt;We will start by creating a Loki config file &lt;code&gt;config.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This file configures where Loki stores the logs and for how long it retains them.&lt;/p&gt;

&lt;p&gt;To keep it short and simple, I'll go over each block in general terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;auth_enabled: false&lt;/code&gt; — Disables multi-tenancy; uses a default tenant.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;server&lt;/code&gt; — Sets the HTTP port Loki listens on, the default is 3100.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;common&lt;/code&gt; — Shared default configs for all jobs.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;schema_config&lt;/code&gt; — Defines how data is indexed and stored from a given date.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;storage_config&lt;/code&gt; — Specifies where chunks (actual log data) are physically written on disk when using filesystem storage. This is important — I recommend creating a volume so it won't get deleted if the container fails.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;limits_config&lt;/code&gt; — Per-tenant limits: for example, 7-day retention.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;chunk_store_config&lt;/code&gt; — Caps how far back queries can look, preventing reads beyond the retention window.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;auth_enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

&lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;http_listen_port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3100&lt;/span&gt;

&lt;span class="na"&gt;common&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;path_prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/loki&lt;/span&gt;
  &lt;span class="na"&gt;replication_factor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;ring&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kvstore&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;store&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;inmemory&lt;/span&gt;

&lt;span class="na"&gt;schema_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2024-01-01&lt;/span&gt;
      &lt;span class="na"&gt;store&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;boltdb-shipper&lt;/span&gt;
      &lt;span class="na"&gt;object_store&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;filesystem&lt;/span&gt;
      &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v13&lt;/span&gt;
      &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;index_&lt;/span&gt;
        &lt;span class="na"&gt;period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;24h&lt;/span&gt;

&lt;span class="na"&gt;storage_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filesystem&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/loki&lt;/span&gt;

&lt;span class="na"&gt;limits_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;retention_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;168h&lt;/span&gt;   &lt;span class="c1"&gt;# 7 days&lt;/span&gt;
  &lt;span class="na"&gt;allow_structured_metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

&lt;span class="na"&gt;chunk_store_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;max_look_back_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;168h&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we've created the config file, let's run the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; loki &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 3100:3100 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/config.yaml:/etc/loki/config.yaml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; loki-data:/loki &lt;span class="se"&gt;\&lt;/span&gt;
  grafana/loki:2.9.17 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-config&lt;/span&gt;.file&lt;span class="o"&gt;=&lt;/span&gt;/etc/loki/config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
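&lt;p&gt;If you prefer Docker Compose over a raw &lt;code&gt;docker run&lt;/code&gt;, the same container can be sketched like this (the file and volume names mirror the command above; adjust the paths to your environment):&lt;/p&gt;

```yaml
services:
  loki:
    image: grafana/loki:2.9.17
    command: -config.file=/etc/loki/config.yaml
    ports:
      - "3100:3100"
    volumes:
      - ./config.yaml:/etc/loki/config.yaml   # the config file we just wrote
      - loki-data:/loki                        # named volume so chunks survive restarts

volumes:
  loki-data:
```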



&lt;h3&gt;
  
  
  Running Alloy on the host
&lt;/h3&gt;

&lt;p&gt;Before running the Alloy container, we need to tell Alloy which logs we want to pull and how. That is the job of the &lt;code&gt;config.alloy&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;The hardest part of this stack is that the config file is written in a Grafana-specific configuration language (originally called "River", now known as the Alloy configuration syntax).&lt;/p&gt;

&lt;p&gt;But there is also a simple solution: you can use &lt;a href="https://grafana.github.io/alloy-configurator/" rel="noopener noreferrer"&gt;this&lt;/a&gt; generator to create a simple config file for your needs.&lt;/p&gt;

&lt;p&gt;For example, you can use this file. I'll break it down block by block, as we need to understand what each part does.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;write&lt;/span&gt; &lt;span class="s2"&gt;"local_host"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;endpoint&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"http://&amp;lt;LOKI_SERVER_IP&amp;gt;:3100/loki/api/v1/push"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;relabel&lt;/span&gt; &lt;span class="s2"&gt;"journal"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;forward_to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;source_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"__journal__systemd_unit"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;target_label&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"service_name"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;source_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"__journal__transport"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;target_label&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"transport"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;source_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"__journal_priority_keyword"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;target_label&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"level"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;source_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"__journal__hostname"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;target_label&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"host_name"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;journal&lt;/span&gt; &lt;span class="s2"&gt;"read"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;forward_to&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;write&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;local_host&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;receiver&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;relabel_rules&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;relabel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;journal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rules&lt;/span&gt;
  &lt;span class="nx"&gt;labels&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;job&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"log_collection"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  loki.write
&lt;/h3&gt;

&lt;p&gt;This block defines the endpoint we push logs to. We need to give it the DNS name or IP address of our Loki server.&lt;/p&gt;

&lt;p&gt;You can change &lt;code&gt;local_host&lt;/code&gt; to anything you like; it's just a label. You'll see the same pattern in the other blocks too.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;write&lt;/span&gt; &lt;span class="s2"&gt;"local_host"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;endpoint&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"http://&amp;lt;Loki_Server_IP&amp;gt;:3100/loki/api/v1/push"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  loki.relabel
&lt;/h3&gt;

&lt;p&gt;This block is all about relabeling.&lt;br&gt;
When Loki receives journal logs, their fields arrive with internal names that start with &lt;code&gt;__journal_&lt;/code&gt;, such as &lt;code&gt;__journal__systemd_unit&lt;/code&gt;. To make our lives easier, we create rules that relabel them into our own labels, giving us a simpler way to query them later.&lt;/p&gt;

&lt;p&gt;Each rule maps a journal field onto a &lt;code&gt;target_label&lt;/code&gt;. You can set the value to any label name you want, e.g. &lt;code&gt;target_label = "ninja"&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;relabel&lt;/span&gt; &lt;span class="s2"&gt;"journal"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;forward_to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;source_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"__journal__systemd_unit"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;target_label&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"service_name"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;source_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"__journal__transport"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;target_label&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"transport"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;source_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"__journal_priority_keyword"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;target_label&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"level"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;source_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"__journal__hostname"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;target_label&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"host_name"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  loki.source.journal
&lt;/h3&gt;

&lt;p&gt;The final block is the input. Under the hood it uses the systemd journal API to collect our logs.&lt;br&gt;
It configures which block to forward the data to, which rules to relabel the data with, and finally adds one small static label, which will be the common label for all data coming from this Alloy instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;journal&lt;/span&gt; &lt;span class="s2"&gt;"read"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;forward_to&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;write&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;local_host&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;receiver&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;relabel_rules&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;loki&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;relabel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;journal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rules&lt;/span&gt;
  &lt;span class="nx"&gt;labels&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;job&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"log_collection"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After creating our config file, we need to pull the Alloy image to the host. I'll leave this part to you, as it differs by environment.&lt;/p&gt;

&lt;p&gt;I will jump straight into running the container.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;IMPORTANT NOTE! If you use automation and deploy too many Alloy containers at once, you may exceed Loki's maximum active streams limit and overwhelm the server, so start with one container and then deploy the rest.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To let the Alloy container read the journal logs, we need to mount the journal log directories and add the container's root user to the host's journal group (via its group ID), so it has read permissions.&lt;/p&gt;

&lt;p&gt;So we run this command:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the journald group ID may vary, so pass the correct ID to &lt;code&gt;--group-add&lt;/code&gt;. To find the ID on the host, run:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;getent group systemd-journal | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;: &lt;span class="nt"&gt;-f3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
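&lt;p&gt;If you script the deployment, you can capture that ID into a variable and hand it to &lt;code&gt;--group-add&lt;/code&gt;. A minimal sketch (the fallback line only exists so the snippet runs on hosts without a systemd-journal group; the GID 999 in it is a made-up example):&lt;/p&gt;

```shell
# Grab the systemd-journal GID; getent prints "systemd-journal:x:999:alloy"-style lines,
# and field 3 (colon-separated) is the group ID.
line=$(getent group systemd-journal || echo "systemd-journal:x:999:")
JOURNAL_GID=$(echo "$line" | cut -d: -f3)
echo "journal group id: $JOURNAL_GID"
# Then: docker run ... --group-add "$JOURNAL_GID" ...
```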





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; alloy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--network&lt;/span&gt; host &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--restart&lt;/span&gt; unless-stopped &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--group-add&lt;/span&gt; &amp;lt;JOURNAL_GROUP_ID&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &amp;lt;Path_To_Config_File&amp;gt;:/etc/alloy/config.alloy:ro &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /run/log/journal:/run/log/journal:ro &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /var/log/journal:/var/log/journal:ro &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /etc/machine-id:/etc/machine-id:ro &lt;span class="se"&gt;\&lt;/span&gt;
  grafana/alloy:v1.11.3 &lt;span class="se"&gt;\&lt;/span&gt;
  run &lt;span class="nt"&gt;--server&lt;/span&gt;.http.listen-addr&lt;span class="o"&gt;=&lt;/span&gt;0.0.0.0:12345 /etc/alloy/config.alloy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verify it works
&lt;/h3&gt;

&lt;p&gt;Before moving on to Grafana, let's make sure everything is running as expected.&lt;/p&gt;

&lt;p&gt;On the Loki host, check that Loki is ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://&amp;lt;LOKI_SERVER_IP&amp;gt;:3100/ready
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get back &lt;code&gt;ready&lt;/code&gt;.&lt;/p&gt;
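&lt;p&gt;You can also push a hand-made test line through the same API Alloy uses. Here is a sketch that builds a JSON payload for &lt;code&gt;/loki/api/v1/push&lt;/code&gt; (the &lt;code&gt;job="smoke_test"&lt;/code&gt; label is my own choice, not part of the guide's config):&lt;/p&gt;

```shell
# A Loki push payload is one or more streams, each with label pairs
# and ["timestamp_in_nanoseconds", "log line"] entries.
ts="$(date +%s)000000000"   # seconds padded out to nanoseconds
payload='{"streams":[{"stream":{"job":"smoke_test"},"values":[["'"$ts"'","hello loki"]]}]}'
echo "$payload"
# Push it (replace LOKI_SERVER_IP with your server):
# curl -s -X POST -H "Content-Type: application/json" \
#   "http://LOKI_SERVER_IP:3100/loki/api/v1/push" -d "$payload"
```

&lt;p&gt;Afterwards, a query for that label in Grafana or via the API should return the test line.&lt;/p&gt;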

&lt;p&gt;On the Alloy host, check the container logs for any errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs alloy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If Alloy is shipping logs successfully, you can confirm Loki is receiving them by querying for our job label:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-G&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"http://&amp;lt;LOKI_SERVER_IP&amp;gt;:3100/loki/api/v1/labels"&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;job
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If all three checks pass, you're good to go.&lt;/p&gt;




&lt;h3&gt;
  
  
  Grafana
&lt;/h3&gt;

&lt;p&gt;From here, you can add Loki as a Grafana data source and build dashboards for the logs.&lt;/p&gt;
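&lt;p&gt;Once the data source is configured, you can query by the labels our relabel rules created. A few example LogQL queries (the host and service values are hypothetical; substitute your own):&lt;/p&gt;

```logql
{job="log_collection"}
{job="log_collection", level="err"}
{job="log_collection", host_name="node01", service_name="sshd.service"}
```

&lt;p&gt;The first returns everything shipped by Alloy, the second filters to error-level entries, and the third narrows to one service on one host.&lt;/p&gt;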




&lt;h2&gt;
  
  
  Thank You
&lt;/h2&gt;

&lt;p&gt;And that's a wrap!&lt;/p&gt;

&lt;p&gt;Thank you so much for taking the time to read my guide — it really means a lot. I hope it saved you some of the headache I went through figuring this out, and that you now have a working Alloy + Loki stack pulling logs from your hosts.&lt;/p&gt;

&lt;p&gt;If you spotted something that could be improved, have a question, or just want to share how your own monitoring setup looks, I'd love to hear from you in the comments.&lt;/p&gt;

&lt;p&gt;Stay tuned — more monitoring guides are on the way.&lt;/p&gt;

&lt;p&gt;— Alex (TheBlueDrara)&lt;/p&gt;

</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>tutorial</category>
      <category>networking</category>
    </item>
    <item>
      <title>How to Monitor Network Device Health Using SNMP Exporter and Prometheus</title>
      <dc:creator>Alex Umansky</dc:creator>
      <pubDate>Sun, 16 Nov 2025 17:22:55 +0000</pubDate>
      <link>https://forem.com/thebluedrara/how-to-monitor-network-device-health-using-snmp-exporter-and-prometheus-1ee</link>
      <guid>https://forem.com/thebluedrara/how-to-monitor-network-device-health-using-snmp-exporter-and-prometheus-1ee</guid>
      <description>&lt;h2&gt;
  
  
  The Art of Monitoring Your Network
&lt;/h2&gt;

&lt;p&gt;Hello dear tech priest readers. My name is Alex Umansky, aka TheBlueDrara, and I welcome you to a small and simple guide I made about monitoring the different devices on my network.&lt;/p&gt;

&lt;p&gt;This guide was inspired by a challenge I had to overcome on one of my projects.  &lt;/p&gt;

&lt;p&gt;I was tasked with monitoring the health state of different devices, like servers and switches, on my network, using Prometheus and Grafana as my main monitoring tools.&lt;/p&gt;

&lt;p&gt;So let’s jump right into the fun.&lt;/p&gt;




&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;To begin with, let's start with a short overview of what we are going to learn here.&lt;/p&gt;

&lt;p&gt;We will start by shallow diving into what the SNMP protocol is. Since we are using Prometheus as our monitoring stack, we will then focus on how to deploy and use the SNMP exporter, which lets us pull the needed metrics from our devices.&lt;/p&gt;




&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;For the pre-setup, we will need to install and acquire some tools and resources for our work.&lt;br&gt;&lt;br&gt;
We will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;snmp_exporter Docker image, which you can get &lt;a href="https://hub.docker.com/r/prom/snmp-exporter" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;snmp_exporter generator: clone &lt;a href="https://github.com/prometheus/snmp_exporter" rel="noopener noreferrer"&gt;this GitHub repository&lt;/a&gt; and build the image from its Dockerfile
&lt;/li&gt;
&lt;li&gt;Prometheus&lt;/li&gt;
&lt;li&gt;Grafana&lt;/li&gt;
&lt;li&gt;snmp-mibs-downloader&lt;/li&gt;
&lt;li&gt;snmp&lt;/li&gt;
&lt;li&gt;git
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install -y docker-ce snmp-mibs-downloader snmp git
docker pull prom/snmp-generator:v0.29.0
docker pull prom/snmp-exporter:v0.28.0
docker pull prom/prometheus:v3.5.0 
docker pull grafana/grafana:12.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Prometheus - monitoring system that collects metrics&lt;br&gt;&lt;br&gt;
Grafana - data visualization tool&lt;br&gt;&lt;br&gt;
SNMP exporter - acts as the SNMP manager, sends requests to devices, and collects data&lt;br&gt;&lt;br&gt;
SNMP generator - helps us create the SNMP exporter config file using a textual format&lt;br&gt;&lt;br&gt;
snmp-mibs-downloader - allows us to install the necessary MIB files&lt;br&gt;&lt;br&gt;
snmp - a utility tool for commands like snmpwalk  &lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Before we begin
&lt;/h2&gt;

&lt;p&gt;Before we jump into the setup, it is important to understand how the flow of our tools works.&lt;br&gt;&lt;br&gt;
I promise that I will cover every detail needed in the future, but for now, let's see the flow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grafana =&amp;gt; prometheus =&amp;gt; snmp_exporter =&amp;gt; hardware devices
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will use Grafana to visualize the data that Prometheus collected.&lt;br&gt;&lt;br&gt;
Prometheus will scrape data from the SNMP exporter that acts as an SNMP manager sending SNMP requests to its target agents (hardware devices).&lt;/p&gt;

&lt;p&gt;Again, don't worry, everything will become clearer in the next parts.&lt;/p&gt;

&lt;p&gt;Let's break everything up and begin!&lt;/p&gt;

&lt;p&gt;In this guide, I won't dive deep into what the SNMP protocol is. For the sake of understanding, I'll simplify the explanation, but if you want to dive deeper into the protocol, there are amazing documents about it that you can read &lt;a href="https://uptrace.dev/glossary/snmp-monitoring" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;


&lt;h3&gt;
  
  
  So what is SNMP?
&lt;/h3&gt;

&lt;p&gt;SNMP is quite an old protocol that, I quote, "created a universal language allowing IT teams to decode the operational state of network hardware regardless of manufacturer."&lt;/p&gt;

&lt;p&gt;In simple words, we can use this protocol to pull specific metrics and data from our hardware devices regardless of the different vendors.&lt;/p&gt;

&lt;p&gt;It works in a manager-agent (client-server) architecture: there is an SNMP manager (our SNMP exporter) and SNMP agents, which almost always come preinstalled on modern hardware and just need to be enabled.&lt;/p&gt;


&lt;h3&gt;
  
  
  SNMP Agent
&lt;/h3&gt;

&lt;p&gt;Check each vendor's documentation for how to enable SNMPv3 on the device, and create a separate read-only user for SNMPv3 metrics, using its credentials for authentication.&lt;/p&gt;

&lt;p&gt;Otherwise, we won't be able to pull metrics from the device.&lt;/p&gt;
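&lt;p&gt;Once the agent is enabled, you can sanity-check it from your workstation with the &lt;code&gt;snmp&lt;/code&gt; utilities from the prerequisites. A sketch of walking the standard system subtree over SNMPv3 (the user name, passphrases, protocols and device IP are all placeholders; match them to what you configured on the device):&lt;/p&gt;

```shell
snmpwalk -v3 -l authPriv \
  -u monitor_ro \
  -a SHA -A 'AUTH_PASSPHRASE' \
  -x AES -X 'PRIV_PASSPHRASE' \
  DEVICE_IP 1.3.6.1.2.1.1
```

&lt;p&gt;If the credentials and agent are set up correctly, this should print the device's system description, uptime and contact fields.&lt;/p&gt;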


&lt;h3&gt;
  
  
  What is SNMP Exporter?
&lt;/h3&gt;

&lt;p&gt;The SNMP exporter is a tool that translates SNMP data from network devices into a format that Prometheus can understand.&lt;br&gt;&lt;br&gt;
We will need it if we want to use Prometheus as our monitoring tool.&lt;/p&gt;

&lt;p&gt;The beauty of this exporter is that you can run the service in one place and just give it a target list of your hardware devices.&lt;br&gt;&lt;br&gt;
The manager will send requests to the agents to collect the data.&lt;/p&gt;
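&lt;p&gt;To make that concrete: each Prometheus scrape asks the exporter to probe a single device, passed as a URL query parameter (the module and auth names are passed the same way). A minimal sketch, assuming the exporter runs locally on its default port 9116 and using an example device address:&lt;/p&gt;

```shell
# Build the URL Prometheus hits for one device.
# "localhost:9116" and "192.168.0.10" are example addresses; 9116 is the exporter's default port.
exporter="localhost:9116"
device="192.168.0.10"
scrape_url="http://${exporter}/snmp?target=${device}"
echo "${scrape_url}"
```

&lt;p&gt;Opening a URL like this in a browser or with curl is also a quick way to test the exporter by hand.&lt;/p&gt;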

&lt;p&gt;The SNMP exporter has two main files that we need to create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;snmp.yml - config file that defines which metrics the SNMP exporter should scrape
&lt;/li&gt;
&lt;li&gt;target_list.yml - a list of targets to scrape (hardware devices, switches, servers, etc.)&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  How to create snmp.yml file
&lt;/h3&gt;

&lt;p&gt;This file holds the configuration that tells the SNMP exporter what metrics we want to pull from the agents.&lt;br&gt;&lt;br&gt;
The issue is that this file needs to be written with OIDs for best results.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note, OIDs are a numeric tree structure. Each OID is unique and represents a single metric that can be pulled from a device.&lt;/p&gt;
&lt;/blockquote&gt;
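&lt;p&gt;For example, &lt;code&gt;ifOperStatus&lt;/code&gt; (a metric we will use later in this guide) sits at a fixed spot in that tree. You can resolve names to numbers yourself with Net-SNMP's &lt;code&gt;snmptranslate -On IF-MIB::ifOperStatus&lt;/code&gt;:&lt;/p&gt;

```text
IF-MIB::ifOperStatus = .1.3.6.1.2.1.2.2.1.8
iso(1).org(3).dod(6).internet(1).mgmt(2).mib-2(1).interfaces(2).ifTable(2).ifEntry(1).ifOperStatus(8)
```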

&lt;p&gt;To handle this problem, we will use the official SNMP generator.&lt;/p&gt;

&lt;p&gt;It will take a bit of setup, but it should be simple.&lt;/p&gt;


&lt;h3&gt;
  
  
  How to set up the SNMP generator
&lt;/h3&gt;

&lt;p&gt;Start by pulling the official generator repository and building the Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/prometheus/snmp_exporter.git
cd snmp_exporter/generator
docker build -t &amp;lt;image_name&amp;gt; .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need to provide the container with two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;generator.yaml&lt;/code&gt; file
&lt;/li&gt;
&lt;li&gt;The MIB files
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note, MIBs are text files that help us translate human-readable formats into OIDs.&lt;/p&gt;
&lt;/blockquote&gt;
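&lt;p&gt;For a feel of what a MIB contains, here is an abridged excerpt of the &lt;code&gt;ifOperStatus&lt;/code&gt; definition from IF-MIB. The last line is what maps the human-readable name onto the numeric OID tree:&lt;/p&gt;

```text
ifOperStatus OBJECT-TYPE
    SYNTAX      INTEGER { up(1), down(2), testing(3), ... }
    MAX-ACCESS  read-only
    STATUS      current
    ::= { ifEntry 8 }
```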




&lt;h3&gt;
  
  
  Creating generator.yaml file
&lt;/h3&gt;

&lt;p&gt;The generator config file is a YAML file, written in a human-readable format, that defines which metrics we want to pull.&lt;br&gt;&lt;br&gt;
Here we just give it a whole module to "walk" (scrape).&lt;br&gt;&lt;br&gt;
We can also specify which exact metrics we want from the module.&lt;/p&gt;

&lt;p&gt;In this part, you start to shine — pull the exact metrics you need for your project.&lt;/p&gt;

&lt;p&gt;For a simple example, we will use the IF-MIB module.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;IF-MIB: network interfaces.&lt;br&gt;
SNMPv2-MIB, SNMPv2-SMI, SNMPv2-TC: standard objects, types and counters.&lt;br&gt;
HOST-RESOURCES-MIB: CPU load, memory, storage.&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ./snmp_exporter/generator
vim generator.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, I want to pull two metrics from the IF-MIB module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ifOperStatus
ifAdminStatus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These report the administrative status and the operational status of each port on a switch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;modules:
  if_mib:
    walk:
      - IF-MIB::ifOperStatus
      - IF-MIB::ifAdminStatus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
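&lt;p&gt;Once everything is wired up, these two objects surface in Prometheus roughly like this (the exact label set depends on the lookups in your generator config; the values follow the IF-MIB convention of 1 = up, 2 = down):&lt;/p&gt;

```text
ifOperStatus{ifIndex="1"} 1
ifAdminStatus{ifIndex="1"} 1
```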






&lt;h3&gt;
  
  
  Downloading MIBs
&lt;/h3&gt;

&lt;p&gt;Now that we have our generator.yaml file, let's install our MIBs.&lt;/p&gt;

&lt;p&gt;For that, we will use the &lt;code&gt;snmp-mibs-downloader&lt;/code&gt; package.&lt;/p&gt;

&lt;p&gt;Install the package, then run the command below to download the default MIBs (vendors also publish their own MIBs with vendor-specific metrics).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install snmp-mibs-downloader
download-mibs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will download the MIBs we need to this path: &lt;code&gt;/usr/share/snmp/mibs/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now let's generate our snmp.yml file.&lt;/p&gt;

&lt;p&gt;We will create a directory that will contain our exact MIBs that we used in the generator file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ./snmp_exporter/generator
mkdir -p mibs
cp /usr/share/snmp/mibs/&amp;lt;MIBS&amp;gt; ./mibs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Running the container
&lt;/h3&gt;

&lt;p&gt;We will mount the working directory and the MIBs directory into the container, and set two environment variables: &lt;code&gt;MIBS&lt;/code&gt; (which MIBs we used) and &lt;code&gt;MIBDIRS&lt;/code&gt; (where they are located inside the container).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -v "$PWD:/work" -w /work -v "$PWD/mibs:/mibs:ro" -e MIBDIRS="/mibs" -e MIBS="&amp;lt;MIBS&amp;gt;" &amp;lt;image_name&amp;gt; generate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The generator should now write an snmp.yml file into the working directory.&lt;/p&gt;

&lt;p&gt;If not, the container will show an error and what is missing to complete the task.&lt;/p&gt;




&lt;h3&gt;
  
  
  Choosing SNMP version
&lt;/h3&gt;

&lt;p&gt;We are not done yet! One small last thing: what version of SNMP should we use?&lt;/p&gt;

&lt;p&gt;The version is chosen through the authentication block we add to our snmp.yml file.&lt;/p&gt;

&lt;p&gt;In this guide, we will use SNMPv3 as it’s more secure.&lt;br&gt;&lt;br&gt;
For that, we need to add a small block of code to our snmp.yml file for authentication to our device.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;auths:
  switches: # the name of the auth
    version: 3
    username: '&amp;lt;switch_username&amp;gt;'
    security_level: authPriv
    auth_protocol: SHA
    password: '&amp;lt;switch_password&amp;gt;'
    priv_protocol: AES
    priv_password: '&amp;lt;PrivateProtocol_password&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Configuring target_list.yaml file
&lt;/h3&gt;

&lt;p&gt;Now that we finally have the snmp.yml file, we need a target list: which devices do we want to scrape?&lt;br&gt;&lt;br&gt;
In the target list, we give the targets, the module we used, and the auth name from the snmp.yml file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- targets: &amp;amp;switches_targets
    - 192.168.0.10
  labels: { module: 'if_mib', auth: 'switches' }
# The YAML anchor above lets us reuse the same targets with another module:
- targets: *switches_targets
  labels: { module: '&amp;lt;another_module&amp;gt;', auth: 'switches' }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Prometheus task
&lt;/h3&gt;

&lt;p&gt;For Prometheus, we need to create a config file that will configure a scrape task from its exporters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'snmp'
    scrape_interval: 30s
    scrape_timeout: 25s
    metrics_path: /snmp
    file_sd_configs:
      - files:
          - /prometheus/snmp_targets_list.yaml
        refresh_interval: 1m
    relabel_configs:
      # Pass the device address, module and auth as URL parameters to the exporter
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [module]
        target_label: __param_module
      - source_labels: [auth]
        target_label: __param_auth
      - source_labels: [__param_target]
        target_label: instance
      # Point the actual scrape at the SNMP exporter itself
      - target_label: __address__
        replacement: &amp;lt;IP_of_snmp_exporter&amp;gt;:9116
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A small recap of what we did so far and how everything comes together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We installed the tools we need
&lt;/li&gt;
&lt;li&gt;We pulled all the necessary Docker images
&lt;/li&gt;
&lt;li&gt;We generated an snmp.yml using a generator image; for that, we downloaded the MIBs and configured a generator.yaml for our needs
&lt;/li&gt;
&lt;li&gt;We created a target list for the SNMP exporter
&lt;/li&gt;
&lt;li&gt;We enabled SNMPv3 on our hardware and created a read-only user for SNMPv3 metrics
&lt;/li&gt;
&lt;li&gt;We created a Prometheus scraping task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's deploy all our services and make the config files take effect.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;Using Docker Compose, let's deploy our services with ease.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  prometheus:
    image: prom/prometheus:v3.5.0
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./snmp_targets_list.yaml:/prometheus/snmp_targets_list.yaml
      - ./prometheus-data:/prometheus
    networks:
      - monitoring
    restart: always

  grafana:
    image: grafana/grafana:12.0.0
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-storage:/var/lib/grafana
    networks:
      - monitoring
    restart: always

  snmp-exporter:
    image: prom/snmp-exporter:v0.28.0
    container_name: snmp-exporter
    network_mode: "host"
    restart: always
    volumes:
      - ./snmp.yml:/etc/snmp_exporter/snmp.yml:ro
    command: --config.file=/etc/snmp_exporter/snmp.yml

volumes:
  grafana-storage:

networks:
  monitoring:
    external: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note that SNMP does not work well behind NAT, so in the Docker Compose file we run the SNMP exporter in host network mode.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now we will be able to reach the Prometheus UI by browsing to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;IP_of_service&amp;gt;:9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we should see that the task we created is able to pull the metrics we configured.&lt;/p&gt;
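&lt;p&gt;A couple of quick sanity-check queries to try in the Prometheus UI (the metric names assume the if_mib module from earlier):&lt;/p&gt;

```text
up{job="snmp"}        # 1 when the scrape of that target succeeded
ifOperStatus == 2     # ports that are operationally down
```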




&lt;h3&gt;
  
  
  How to test if we enabled the SNMP agent
&lt;/h3&gt;

&lt;p&gt;We can use the MIBs with Net-SNMP tools like &lt;code&gt;snmpwalk&lt;/code&gt; to query a device manually.&lt;/p&gt;

&lt;p&gt;For that, we need to configure the path to the MIBs.&lt;/p&gt;

&lt;p&gt;Edit this config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/snmp/snmp.conf (system-wide defaults)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And add these lines.&lt;/p&gt;

&lt;p&gt;They point to the MIBs directory and specify which MIBs are present and should be used.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mibs +ALL
mibdirs /usr/share/snmp/mibs:/usr/share/snmp/mibs/ietf:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/net-snmp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we did everything correctly, we should be able to visit Prometheus and see the task pulling metrics that we configured from our devices.&lt;/p&gt;

&lt;p&gt;An easy way to manually check if you can pull data from a device is by running an &lt;code&gt;snmpwalk&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;snmpwalk -v3 -u &amp;lt;switch_username&amp;gt; -l authPriv -a SHA -A '&amp;lt;switch_password&amp;gt;' -x AES -X '&amp;lt;PrivateProtocol_password&amp;gt;' &amp;lt;IP_of_device&amp;gt; -m &amp;lt;Module_name&amp;gt; #IF-MIB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a long output of different metrics from the device.&lt;/p&gt;
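&lt;p&gt;For the IF-MIB objects used in this guide, the output looks roughly like this (interface indexes and states will differ per device):&lt;/p&gt;

```text
IF-MIB::ifOperStatus.1 = INTEGER: up(1)
IF-MIB::ifOperStatus.2 = INTEGER: down(2)
IF-MIB::ifAdminStatus.1 = INTEGER: up(1)
```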




&lt;h3&gt;
  
  
  Grafana
&lt;/h3&gt;

&lt;p&gt;I will not go into building dashboards as it's an art of its own, and this guide is long enough.&lt;/p&gt;




&lt;p&gt;I hope you enjoyed this guide. Feel free to comment and give me some feedback to keep learning and improving.&lt;/p&gt;

&lt;p&gt;Sharing my small research and how I overcame challenges is a small gesture to the open-source community on my part, and it brings me a bit of joy.&lt;/p&gt;

</description>
      <category>networking</category>
      <category>monitoring</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Linux Repository Mirroring Made Simple</title>
      <dc:creator>Alex Umansky</dc:creator>
      <pubDate>Sun, 07 Sep 2025 20:06:25 +0000</pubDate>
      <link>https://forem.com/thebluedrara/linux-repository-mirroring-made-simple-4gm2</link>
      <guid>https://forem.com/thebluedrara/linux-repository-mirroring-made-simple-4gm2</guid>
      <description>&lt;h2&gt;
  
  
  Mirror, mirror on the wall, apt-get won’t fail me at all
&lt;/h2&gt;

&lt;p&gt;Hello dear reader, and as &lt;a class="mentioned-user" href="https://dev.to/silent_mobius"&gt;@silent_mobius&lt;/a&gt; refers to, &lt;em&gt;gentle readers&lt;/em&gt;, welcome!&lt;/p&gt;

&lt;p&gt;My name is Alex Umansky, aka TheBlueDrara, and I welcome you to a small and simple guide I wrote on Linux repository mirroring.&lt;/p&gt;

&lt;p&gt;This guide was inspired by a task I received in a project where I had to localize many packages in my environment, as sooner or later the internet on my poor, poor Linux server would be cut off.&lt;/p&gt;

&lt;p&gt;So here we are — shall we begin?&lt;/p&gt;




&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;For starters, let’s understand what we are going to do here.&lt;br&gt;&lt;br&gt;
We will be creating a mirror of the official Ubuntu repository, so we can install packages in an offline environment.&lt;/p&gt;

&lt;p&gt;And since not all packages come from the official Ubuntu repository, some packages will need to be downloaded manually and stored in our own repo — a so-called “flat” repository.&lt;/p&gt;

&lt;p&gt;I’ll explain the difference between them in detail in the upcoming steps.&lt;/p&gt;


&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, we need to install some prerequisites to be able to mirror repositories and expose our local mirror so we can pull packages from our mirror server.&lt;/p&gt;

&lt;p&gt;We will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apt-utils&lt;/code&gt; (to generate the files we need)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nginx&lt;/code&gt; (a web server to expose our local mirror)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;debmirror&lt;/code&gt; (a tool to create a local Debian/Ubuntu mirror)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gnupg&lt;/code&gt; (to create and manage GPG keys)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get update
apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-utils nginx debmirror gnupg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;We’ll start with the official Ubuntu mirror.&lt;br&gt;&lt;br&gt;
We’ll use &lt;strong&gt;debmirror&lt;/strong&gt;, a tool that mirrors a remote repo using parameters and an upstream URL.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: This method works not only with the Ubuntu repository but also with other vendor repositories, as long as you have the repo URL and the GPG key.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;Before mirroring, it’s useful to understand how Debian/Ubuntu repos are built.&lt;br&gt;&lt;br&gt;
They follow a specific file structure that is worth learning.&lt;/p&gt;

&lt;p&gt;They are organized into &lt;strong&gt;Suites&lt;/strong&gt; and &lt;strong&gt;Components&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Suites&lt;/strong&gt; are distributions or releases of packages, for example: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;noble&lt;/code&gt; → The base release of Ubuntu 24.04&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;noble-updates&lt;/code&gt; → Stable updates after the initial 24.04 release
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Components&lt;/strong&gt; are sections grouped by license/support status, such as:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;main&lt;/code&gt; → Official Canonical-supported open source
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;restricted&lt;/code&gt; → Proprietary drivers/firmware
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;universe&lt;/code&gt; → Community-maintained software
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;multiverse&lt;/code&gt; → Non-free software
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re unsure what packages you need and have the storage space, you can simply mirror them all — but keep in mind the whole repo may take around 2 TB of storage space.&lt;/p&gt;

&lt;p&gt;Sometimes it’s better to start with a small mirror of the &lt;code&gt;main&lt;/code&gt; component of each suite and see if you’re not missing any needed dependencies.&lt;/p&gt;


&lt;h2&gt;
  
  
  Creating a Debian Repository
&lt;/h2&gt;

&lt;p&gt;Let’s start by creating a directory to store our repository structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /storage/ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The official Ubuntu repo URL we will use is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://archive.ubuntu.com/ubuntu/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we’ll mirror the &lt;code&gt;noble&lt;/code&gt; and &lt;code&gt;noble-updates&lt;/code&gt; suites, and only the &lt;code&gt;main&lt;/code&gt; component from each one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;debmirror /storage/ubuntu &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nosource&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--progress&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;archive.ubuntu.com &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--root&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ubuntu &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dist&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noble,noble-updates &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--section&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--arch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;amd64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--ignore-release-gpg&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: For safer practice, remove &lt;code&gt;--ignore-release-gpg&lt;/code&gt; and instead add the vendor’s GPG key to &lt;code&gt;/usr/share/keyrings&lt;/code&gt; so signatures are verified.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The mirroring process may take a while depending on what you mirrored.&lt;/p&gt;




&lt;h2&gt;
  
  
  Repository File Structure
&lt;/h2&gt;

&lt;p&gt;Before we head forward, it is important to understand the file structure of a Debian repo.&lt;/p&gt;

&lt;p&gt;Once finished downloading, you’ll see three main directories:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;project&lt;/code&gt; → global metadata (mostly Debian-specific, so we will set this aside for now)
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dists&lt;/code&gt; → contains the suites like &lt;code&gt;noble&lt;/code&gt;, &lt;code&gt;noble-updates&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pool&lt;/code&gt; → contains all the &lt;code&gt;.deb&lt;/code&gt; packages
&lt;/li&gt;
&lt;/ul&gt;
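&lt;p&gt;Sketched out, the mirror we just created looks roughly like this (suite and package names are examples):&lt;/p&gt;

```text
/storage/ubuntu/
├── project/
├── dists/
│   ├── noble/
│   │   ├── Release
│   │   ├── InRelease
│   │   └── main/binary-amd64/Packages.gz
│   └── noble-updates/
└── pool/
    └── main/
        └── n/nginx/nginx_..._amd64.deb
```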

&lt;h3&gt;
  
  
  &lt;code&gt;pool&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;A directory that holds all of our &lt;code&gt;.deb&lt;/code&gt; packages.&lt;br&gt;&lt;br&gt;
They can be sorted alphabetically in directories or simply stored together.&lt;/p&gt;

&lt;p&gt;It doesn’t matter for APT — it only depends on your preference, nice and tidy like the Inquisition of Mankind, or chaotic like the Chaos Warbands.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;code&gt;dists&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This directory contains:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;suite directories (different releases like &lt;code&gt;noble/&lt;/code&gt;), and inside each suite are component directories like &lt;code&gt;main/&lt;/code&gt;, &lt;code&gt;restricted/&lt;/code&gt;, etc.
&lt;/li&gt;
&lt;li&gt;repository metadata files: &lt;code&gt;Packages&lt;/code&gt;, &lt;code&gt;Release&lt;/code&gt;, &lt;code&gt;InRelease&lt;/code&gt;, &lt;code&gt;Release.gpg&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  The Heart of the APT Repo
&lt;/h2&gt;

&lt;p&gt;As said before, inside each &lt;code&gt;dists&lt;/code&gt; directory there are metadata files: &lt;code&gt;Packages&lt;/code&gt;, &lt;code&gt;Release&lt;/code&gt;, &lt;code&gt;InRelease&lt;/code&gt;, &lt;code&gt;Release.gpg&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;These files are the beating heart of the repo. They are what make APT come alive.  &lt;/p&gt;

&lt;p&gt;Like my most favorite album, &lt;em&gt;Holy Diver&lt;/em&gt; by Ronnie James Dio (bless his soul):&lt;br&gt;&lt;br&gt;
&lt;strong&gt;“Between the velvet lies, there’s a truth as hard as steel.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s find our truth.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;code&gt;Packages&lt;/code&gt; file
&lt;/h3&gt;

&lt;p&gt;An index file that lists all &lt;code&gt;.deb&lt;/code&gt; packages with:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;names, file paths, checksums, dependencies
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This file tells the repo what &lt;code&gt;.deb&lt;/code&gt; packages exist, what they depend on, and where they are located in the repo.&lt;/p&gt;

&lt;p&gt;If you add or remove packages from the &lt;code&gt;pool&lt;/code&gt; directory, you must recreate the &lt;code&gt;Packages&lt;/code&gt; file to re-index the packages.&lt;/p&gt;
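&lt;p&gt;A &lt;code&gt;Packages&lt;/code&gt; index is just a series of plain-text stanzas. A minimal sketch with a hypothetical package, showing the kind of fields APT reads:&lt;/p&gt;

```shell
# Write a minimal, hypothetical Packages stanza (normally generated by apt-ftparchive)
printf 'Package: mytool\nVersion: 1.0\nArchitecture: amd64\nFilename: ./mytool_1.0_amd64.deb\nDepends: libc6\n' > /tmp/Packages.sample

# APT uses the Filename field to locate the .deb inside the repo
filename=$(awk -F': ' '/^Filename:/ {print $2}' /tmp/Packages.sample)
echo "$filename"
```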


&lt;h4&gt;
  
  
  &lt;code&gt;Release&lt;/code&gt; file
&lt;/h4&gt;

&lt;p&gt;Summarizes the repository:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;checksums of the index files (like &lt;code&gt;Packages&lt;/code&gt;)
&lt;/li&gt;
&lt;li&gt;the suites/components that exist in the repo
&lt;/li&gt;
&lt;li&gt;general repo metadata (origin, date, architectures)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every change in the repo should be accompanied by regenerating the &lt;code&gt;Release&lt;/code&gt; file.&lt;/p&gt;
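&lt;p&gt;A trimmed-down, hypothetical &lt;code&gt;Release&lt;/code&gt; file, to show what lives inside (the checksum is fake):&lt;/p&gt;

```text
Origin: my-mirror
Suite: noble
Codename: noble
Date: Sun, 07 Sep 2025 20:00:00 UTC
Architectures: amd64
Components: main
SHA256:
 9f86d081884c7d65...  1456  main/binary-amd64/Packages
```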


&lt;h4&gt;
  
  
  &lt;code&gt;InRelease&lt;/code&gt; and &lt;code&gt;Release.gpg&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;Signed versions of the &lt;code&gt;Release&lt;/code&gt; file:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Release.gpg&lt;/code&gt; → older detached signing method
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;InRelease&lt;/code&gt; → newer inline signing method
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To summarize this important section: each time you run &lt;code&gt;apt-get update&lt;/code&gt;, you are pulling two files from a remote repo: the signed &lt;code&gt;InRelease&lt;/code&gt; and the &lt;code&gt;Packages&lt;/code&gt; files — so you can install the packages inside those repos.&lt;/p&gt;
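&lt;p&gt;Put differently, a typical client interaction with the repo boils down to a handful of HTTP fetches (paths assume the noble/main mirror from this guide):&lt;/p&gt;

```text
apt-get update
  GET dists/noble/InRelease                      # signed metadata, verified against the key
  GET dists/noble/main/binary-amd64/Packages.gz  # package index
apt-get install nginx
  GET pool/main/n/nginx/nginx_..._amd64.deb      # path taken from the Packages index
```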


&lt;h2&gt;
  
  
  Exposing the Local Repo
&lt;/h2&gt;

&lt;p&gt;If we want to be able to pull packages from our mirror, we need to make the repo available to others.&lt;br&gt;&lt;br&gt;
In this example we’ll expose it with Nginx.&lt;/p&gt;

&lt;p&gt;Let’s create a config file for Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vim /etc/nginx/sites-available/ubuntu-mirror
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="s"&gt;[::]:80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/ubuntu&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;alias&lt;/span&gt; &lt;span class="n"&gt;/storage/ubuntu&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;autoindex&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;try_files&lt;/span&gt; &lt;span class="nv"&gt;$uri&lt;/span&gt; &lt;span class="nv"&gt;$uri&lt;/span&gt;&lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="s"&gt;@notfound&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/my-local-repo&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;alias&lt;/span&gt; &lt;span class="n"&gt;/storage/my-local-repo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;autoindex&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;try_files&lt;/span&gt; &lt;span class="nv"&gt;$uri&lt;/span&gt; &lt;span class="nv"&gt;$uri&lt;/span&gt;&lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="s"&gt;@notfound&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="s"&gt;@notfound&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable it and restart the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /etc/nginx/sites-available/ubuntu-mirror /etc/nginx/sites-enabled/
systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the repo is available and can be sourced. We’ll touch on how to source it soon.&lt;/p&gt;




&lt;h2&gt;
  
  
  Creating a GPG Key
&lt;/h2&gt;

&lt;p&gt;Signing your repo is not strictly necessary, but it’s a nice practice for automation and branding your repo.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you don’t want to do this step, you can skip ahead.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To sign the repo with our own key, we use the &lt;code&gt;gnupg&lt;/code&gt; tool.&lt;/p&gt;

&lt;p&gt;Generate a key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpg &lt;span class="nt"&gt;--gen-key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the key to see the PUB KEY ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpg &lt;span class="nt"&gt;--list-keys&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Signing the Repo
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Remember: each dist (suite) has its own metadata files that you need to sign.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpg &lt;span class="nt"&gt;--local-user&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;PUB_KEY_ID&amp;gt;"&lt;/span&gt; &lt;span class="nt"&gt;--yes&lt;/span&gt; &lt;span class="nt"&gt;--clearsign&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; dists/&amp;lt;dist_name&amp;gt;/InRelease dists/&amp;lt;dist_name&amp;gt;/Release

gpg &lt;span class="nt"&gt;--local-user&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;PUB_KEY_ID&amp;gt;"&lt;/span&gt; &lt;span class="nt"&gt;--yes&lt;/span&gt; &lt;span class="nt"&gt;--detach-sign&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; dists/&amp;lt;dist_name&amp;gt;/Release.gpg dists/&amp;lt;dist_name&amp;gt;/Release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Export the key into a file so we can pass it to the clients:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpg &lt;span class="nt"&gt;--armor&lt;/span&gt; &lt;span class="nt"&gt;--export&lt;/span&gt; &amp;lt;KEY_PUB_ID&amp;gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &amp;lt;local_repo_key_name&amp;gt;.asc

&lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/share/keyrings/&amp;lt;local_repo_key_name&amp;gt;.gpg &amp;lt;local_repo_key_name&amp;gt;.asc

&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;644 /usr/share/keyrings/&amp;lt;local_repo_key_name&amp;gt;.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And… you sold your soul to the devil — congrats!&lt;br&gt;&lt;br&gt;
Let’s continue with the great work so far.&lt;/p&gt;


&lt;h2&gt;
  
  
  Client Setup
&lt;/h2&gt;

&lt;p&gt;When you run &lt;code&gt;apt-get update&lt;/code&gt;, how does APT know where to pull metadata files from?&lt;/p&gt;

&lt;p&gt;Quite simply — there is a source file that APT reads.&lt;/p&gt;

&lt;p&gt;On the client, let’s create or edit a source file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vim /etc/apt/sources.list.d/ubuntu.sources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the config of our mirrored repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Types: deb
URIs: http://&amp;lt;IP_or_DNS_of_mirror_server&amp;gt;/ubuntu/
Suites: noble noble-updates
Components: main
Architectures: amd64
Signed-By: /usr/share/keyrings/&amp;lt;key_name&amp;gt;.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Place the GPG key where APT expects it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mv&lt;/span&gt; &amp;lt;my_local_repo_key&amp;gt;.gpg /usr/share/keyrings/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update APT:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;APT stores the metadata it pulls in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/var/lib/apt/lists
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check if packages are present in the repo with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-cache search &amp;lt;package_name&amp;gt;
apt-cache show &amp;lt;package_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On newer OS versions you can use the &lt;code&gt;apt&lt;/code&gt; front-end instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt show
apt search
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: make sure there is network connectivity to the server running the local repo.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Creating a Flat Repository
&lt;/h2&gt;

&lt;p&gt;But what if I want to keep a local stash of assorted &lt;code&gt;.deb&lt;/code&gt; packages: vendor downloads, extra tools, or my own builds?&lt;/p&gt;

&lt;p&gt;A flat repo is your answer.&lt;/p&gt;

&lt;p&gt;A flat repo is super simple: just a directory that contains &lt;code&gt;.deb&lt;/code&gt; packages plus the metadata files we learned about before (&lt;code&gt;Packages&lt;/code&gt; and &lt;code&gt;Release&lt;/code&gt;), all in one place.&lt;/p&gt;

&lt;p&gt;Messy? A bit. But it works.&lt;/p&gt;
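&lt;p&gt;As a sketch of where we are heading (the directory and package names here are example values, not anything APT mandates), the finished flat repo is a single directory holding the &lt;code&gt;.deb&lt;/code&gt; files next to their metadata:&lt;/p&gt;

```shell
# Sketch: minimal flat-repo directory; /tmp/flatrepo and the package
# name are example values, not anything APT requires.
REPO=/tmp/flatrepo
mkdir -p "$REPO"
touch "$REPO/mytool_1.0_amd64.deb"   # stand-in for a real .deb you copied in
# After the metadata steps that follow, the same directory also holds:
#   Packages  Packages.gz  Release  Release.gpg  InRelease
ls "$REPO"
```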




&lt;h3&gt;
  
  
  Create &lt;code&gt;Packages&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Because this is a flat repo, we don’t get ready-made metadata; we need to create it ourselves. Run &lt;code&gt;apt-ftparchive&lt;/code&gt; from inside the repo root, so the &lt;code&gt;Filename:&lt;/code&gt; entries it writes are relative paths that clients can resolve against the repo URL.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: these steps also work for a Debian repo, just with different paths.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; &amp;lt;flat_repo_dir&amp;gt;
apt-ftparchive packages &amp;lt;flat_repo_dir&amp;gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &amp;lt;flat_repo_dir&amp;gt;/Packages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
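&lt;p&gt;For reference, each stanza that &lt;code&gt;apt-ftparchive&lt;/code&gt; writes into &lt;code&gt;Packages&lt;/code&gt; looks roughly like this (all values below are invented for illustration; real entries carry full hashes and more fields):&lt;/p&gt;

```plaintext
Package: mytool
Version: 1.0
Architecture: amd64
Filename: ./mytool_1.0_amd64.deb
Size: 10240
SHA256: 0123...abcd
Description: Example package entry
```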



&lt;p&gt;For a Debian repo (pool-based):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; &amp;lt;debian_repo_dir&amp;gt;
apt-ftparchive packages ./pool | &lt;span class="nb"&gt;tee &lt;/span&gt;dists/&amp;lt;suite&amp;gt;/&amp;lt;component&amp;gt;/binary-&amp;lt;Arch&amp;gt;/Packages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;APT usually fetches the compressed index, so generate a &lt;code&gt;Packages.gz&lt;/code&gt; alongside the plain file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;gzip&lt;/span&gt; &lt;span class="nt"&gt;-9c&lt;/span&gt; Packages &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Packages.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
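&lt;p&gt;A quick way to confirm the compressed index is intact is &lt;code&gt;gzip -t&lt;/code&gt;; the sketch below first builds a stand-in &lt;code&gt;Packages&lt;/code&gt; file so it is self-contained:&lt;/p&gt;

```shell
# Self-contained sketch: compress a stand-in index, then verify the archive.
cd "$(mktemp -d)"
printf 'Package: demo\nVersion: 1.0\n' > Packages   # stand-in for a real index
gzip -9c Packages > Packages.gz
gzip -t Packages.gz && echo "Packages.gz OK"        # -t tests integrity only
```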






&lt;h3&gt;
  
  
  Create &lt;code&gt;Release&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;For convenience, I like to keep a config template I can reuse and edit later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; release.conf &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
APT::FTPArchive::Release {
  Origin "&amp;lt;Custom_Origin&amp;gt;";
  Label  "&amp;lt;Custom_Label&amp;gt;";
  Suite  "&amp;lt;Custom_Suite&amp;gt;";
  Version "&amp;lt;Version&amp;gt;";
  Codename "&amp;lt;Custom_Codename&amp;gt;";
  Architectures "&amp;lt;Architecture_for_example_amd64&amp;gt;";
  Components "main";
  Description "&amp;lt;Whatever_you_wish&amp;gt;";
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now generate the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; &amp;lt;flat_repo_dir&amp;gt;
apt-ftparchive &lt;span class="nt"&gt;-c&lt;/span&gt; &amp;lt;Path_to_release.conf&amp;gt; release &amp;lt;path_to_root_directory&amp;gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &amp;lt;path_to_root_directory&amp;gt;/Release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sign it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpg &lt;span class="nt"&gt;--local-user&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;PUB_KEY_ID&amp;gt;"&lt;/span&gt; &lt;span class="nt"&gt;--yes&lt;/span&gt; &lt;span class="nt"&gt;--clearsign&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; InRelease Release
gpg &lt;span class="nt"&gt;--local-user&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;PUB_KEY_ID&amp;gt;"&lt;/span&gt; &lt;span class="nt"&gt;--yes&lt;/span&gt; &lt;span class="nt"&gt;--detach-sign&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; Release.gpg Release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
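&lt;p&gt;If you want to sanity-check the signing flow end to end without touching your real keyring, here is a self-contained sketch using a throwaway key in a temporary &lt;code&gt;GNUPGHOME&lt;/code&gt; (the key ID and file contents are made up for the demo):&lt;/p&gt;

```shell
# Demo: generate a throwaway key, sign a stand-in Release file, verify.
export GNUPGHOME="$(mktemp -d)"        # keeps your real keyring untouched
cd "$(mktemp -d)"
printf 'Origin: Demo\nSuite: demo\n' > Release   # stand-in Release file
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key demo@example.com default default never
gpg --batch --local-user demo@example.com --yes --clearsign -o InRelease Release
gpg --batch --local-user demo@example.com --yes --detach-sign -o Release.gpg Release
gpg --verify Release.gpg Release && echo "detached signature verifies"
```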






&lt;h2&gt;
  
  
  Exposing a Flat Repo
&lt;/h2&gt;

&lt;p&gt;In the first step, we already added the Nginx configuration that exposes this local flat repo, so there is nothing new to do on the server side.&lt;/p&gt;




&lt;h2&gt;
  
  
  Client Setup for Flat Repo
&lt;/h2&gt;

&lt;p&gt;The source file looks a little different here: we drop the &lt;code&gt;Components&lt;/code&gt; line entirely, and for &lt;code&gt;Suites&lt;/code&gt; we point at the repo root with &lt;code&gt;./&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Don’t forget to add your repo key!&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vim /etc/apt/sources.list.d/flatrepo.sources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Types: deb
URIs: http://&amp;lt;IP_or_DNS_of_repo&amp;gt;/&amp;lt;Repo_name&amp;gt;
Suites: ./
Architectures: amd64
Signed-By: /usr/share/keyrings/&amp;lt;Key_name&amp;gt;.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update APT:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;I hope my work and writing helped a hopeless soul somewhere in the vast DevOps universe. Thank you for taking the time to read my work!&lt;/p&gt;

&lt;p&gt;TheBlueDrara&lt;/p&gt;

</description>
      <category>linux</category>
      <category>learning</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
