<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dash0</title>
    <description>The latest articles on Forem by Dash0 (@dash0).</description>
    <link>https://forem.com/dash0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F10919%2F161134fb-0008-4ca1-a545-6e11f83fd19a.png</url>
      <title>Forem: Dash0</title>
      <link>https://forem.com/dash0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dash0"/>
    <language>en</language>
    <item>
      <title>OpenTelemetry Filelog Receiver: A Guide to Ingesting Log Files</title>
      <dc:creator>Ayooluwa Isaiah</dc:creator>
      <pubDate>Tue, 02 Dec 2025 13:38:46 +0000</pubDate>
      <link>https://forem.com/dash0/opentelemetry-filelog-receiver-a-guide-to-ingesting-log-files-38m6</link>
      <guid>https://forem.com/dash0/opentelemetry-filelog-receiver-a-guide-to-ingesting-log-files-38m6</guid>
      <description>&lt;p&gt;Even in the age of cloud-native apps and distributed tracing, plain old log files remain one of the richest sources of truth in any system. From legacy business applications and batch jobs to &lt;a href="https://www.dash0.com/guides/nginx-logs" rel="noopener noreferrer"&gt;NGINX&lt;/a&gt;, databases, and on-prem infrastructure, critical diagnostics still end up written to disk.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver" rel="noopener noreferrer"&gt;The OpenTelemetry Collector filelog receiver&lt;/a&gt; gives you a way to bring those logs into a &lt;a href="https://www.dash0.com/guides/opentelemetry-collector" rel="noopener noreferrer"&gt;modern observability pipeline&lt;/a&gt;. It continuously tails files, parses their contents, and converts raw text into &lt;a href="https://www.dash0.com/knowledge/opentelemetry-logging-explained" rel="noopener noreferrer"&gt;structured OpenTelemetry LogRecords&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This guide shows you how to put that power to work, from the basics of reading a file to building a production-ready pipeline that handles rotation, recovers from restarts, and never loses a single line. You'll learn how to structure, enrich, and standardize log file entries so they become first-class observability data.&lt;/p&gt;

&lt;p&gt;Let's begin!&lt;/p&gt;

&lt;h2&gt;How the Filelog receiver works&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg30rpueploqdrubf9zil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg30rpueploqdrubf9zil.png" alt="An illustration of how filelogreceiver works in OpenTelemetry" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we get into configuration details, it helps to picture how the receiver handles a log file throughout its lifecycle. You can think of it as a simple repeating four-step loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Discover:&lt;/strong&gt; The receiver scans the filesystem at regular intervals, using the &lt;code&gt;include&lt;/code&gt; and &lt;code&gt;exclude&lt;/code&gt; patterns you've set, to figure out which log files it should pay attention to.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Read:&lt;/strong&gt; Once a file is picked up, the receiver opens it and begins following along as new lines are written. The &lt;code&gt;start_at&lt;/code&gt; setting decides whether it begins from &lt;code&gt;beginning&lt;/code&gt; or just tails new content from the &lt;code&gt;end&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parse:&lt;/strong&gt; Each line (or block of lines, if multiline parsing is used) runs through a series of &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/README.md" rel="noopener noreferrer"&gt;Stanza operators&lt;/a&gt; (if configured). These operators parse the raw text, pull out key attributes, assign timestamps and severity levels, and ultimately structure the log data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Emit:&lt;/strong&gt; Finally, the structured log records are passed into the Collector's pipeline, where they can be &lt;a href="https://www.dash0.com/guides/opentelemetry-filter-processor" rel="noopener noreferrer"&gt;filtered&lt;/a&gt;, transformed further, or exported to your backend.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This &lt;code&gt;Discover -&amp;gt; Read -&amp;gt; Parse -&amp;gt; Emit&lt;/code&gt; loop forms the foundation of everything the receiver does.&lt;/p&gt;

&lt;h2&gt;Quick Start: tailing a log file&lt;/h2&gt;

&lt;p&gt;One of the most common use cases is when your application is already writing logs in JSON format to a file. For example, imagine you have a service writing JSON logs to &lt;code&gt;/var/log/myapp/app.log&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"2025-09-28 20:15:12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"User logged in successfully"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"u-123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"source_ip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"192.168.1.100"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"2025-09-28 20:15:45"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"WARN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"Password nearing expiration"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"u-123"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a minimal &lt;code&gt;filelog&lt;/code&gt; receiver example to read and ingest such logs into an OpenTelemetry pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# 1. DISCOVER all .log files in /var/log/myapp/&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/myapp/*.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="c1"&gt;# 2. READ from the beginning of new files&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt;
    &lt;span class="c1"&gt;# 3. PARSE using the json_parser operator&lt;/span&gt;
    &lt;span class="na"&gt;operators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;json_parser&lt;/span&gt;
        &lt;span class="c1"&gt;# Tell the parser where to find the timestamp and how it's formatted&lt;/span&gt;
        &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;parse_from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;attributes.time&lt;/span&gt;
          &lt;span class="na"&gt;layout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%d&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;%H:%M:%S"&lt;/span&gt;
        &lt;span class="c1"&gt;# Tell the parser which field contains the severity&lt;/span&gt;
        &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;parse_from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;attributes.level&lt;/span&gt;

&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;verbosity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;detailed&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a breakdown of the above configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;include&lt;/code&gt;: Points the receiver to all &lt;code&gt;.log&lt;/code&gt; files in &lt;code&gt;/var/log/myapp/&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;start_at: beginning&lt;/code&gt;: Ensures the receiver processes the entire file the first time it sees it. By default (&lt;code&gt;end&lt;/code&gt;), it would only capture new lines written after the Collector starts.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;operators&lt;/code&gt;: In this case, there's just one: the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/json_parser.md" rel="noopener noreferrer"&gt;json_parser&lt;/a&gt;. Its job is to take each log line, interpret it as JSON, and then promote selected fields into the log record's core metadata.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;timestamp&lt;/code&gt; and &lt;code&gt;severity&lt;/code&gt;: Within the &lt;code&gt;json_parser&lt;/code&gt;, we're pulling the &lt;code&gt;time&lt;/code&gt; and &lt;code&gt;level&lt;/code&gt; fields out of the JSON and promoting them to OpenTelemetry's top-level &lt;code&gt;Timestamp&lt;/code&gt; and &lt;code&gt;Severity*&lt;/code&gt; fields for each log record.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.dash0.com/guides/opentelemetry-debug-exporter" rel="noopener noreferrer"&gt;With the debug exporter&lt;/a&gt;, you'll see the parsed and structured output. Instead of just raw JSON, each field is now properly represented inside each log record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LogRecord #0
ObservedTimestamp: 2025-09-28 20:48:36.728437503 +0000 UTC
Timestamp: 2025-09-28 20:15:12 +0000 UTC
SeverityText: INFO
SeverityNumber: Info(9)
Body: Str({"time":"2025-09-28 20:15:12","level":"INFO","message":"User logged in successfully","user_id":"u-123","source_ip":"192.168.1.100"})
Attributes:
     -&amp;gt; user_id: Str(u-123)
     -&amp;gt; source_ip: Str(192.168.1.100)
     -&amp;gt; log.file.name: Str(myapp.log)
     -&amp;gt; time: Str(2025-09-28 20:15:12)
     -&amp;gt; level: Str(INFO)
     -&amp;gt; message: Str(User logged in successfully)
Trace ID:
Span ID:
Flags: 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The raw JSON logs have now been converted into OpenTelemetry's unified log data format, ensuring a consistent foundation for cross-system observability.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;log.file.name&lt;/code&gt; attribute is added by the receiver by default, and you can enable &lt;code&gt;include_file_path&lt;/code&gt; to capture the full file path as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/myapp/*.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;include_file_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows you to easily filter or query logs based on their exact source path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Attributes:
     -&amp;gt; log.file.path: Str(/var/log/myapp/app.log)
     -&amp;gt; log.file.name: Str(app.log)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find more enrichment options in the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/filelogreceiver/README.md" rel="noopener noreferrer"&gt;official OpenTelemetry Filelog receiver documentation&lt;/a&gt;.&lt;/p&gt;
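&lt;p&gt;For example, on hosts where log rotation is handled through symlinks, the resolved variants of these options record the real file behind the link. Here's a sketch (option names as documented in the receiver README; verify them against your Collector version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# otelcol.yaml
receivers:
  filelog:
    include: [/var/log/myapp/*.log]
    include_file_path: true
    # Resolve symlinks so the attributes point at the actual underlying file
    include_file_name_resolved: true
    include_file_path_resolved: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;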

&lt;h2&gt;Filtering and managing log files&lt;/h2&gt;

&lt;p&gt;The most fundamental step in configuring the &lt;code&gt;filelog&lt;/code&gt; receiver is telling it which files to monitor. This is controlled using &lt;code&gt;include&lt;/code&gt; and &lt;code&gt;exclude&lt;/code&gt; glob patterns.&lt;/p&gt;

&lt;p&gt;The receiver first uses &lt;code&gt;include&lt;/code&gt; to generate a list of all potential files, then it applies the &lt;code&gt;exclude&lt;/code&gt; patterns to remove any unwanted files from that list.&lt;/p&gt;

&lt;p&gt;Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/apps/**/*.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;exclude&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/log/apps/**/debug.log&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/log/apps/**/*.tmp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this scenario, the receiver will collect every &lt;code&gt;.log&lt;/code&gt; file under &lt;code&gt;/var/log/apps/&lt;/code&gt;, including subdirectories, but it will skip any file named &lt;code&gt;debug.log&lt;/code&gt; and any file ending with &lt;code&gt;.tmp&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;Excluding files by modification age&lt;/h3&gt;

&lt;p&gt;If the log directory you're reading contains many existing log files, you can use &lt;code&gt;exclude_older_than&lt;/code&gt; to ignore files that have not been modified within a given time window:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/myapp/*.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;exclude_older_than&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;24h&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, even if &lt;code&gt;app-2025-07-15.log&lt;/code&gt; matches the pattern, it will be skipped if it hasn't been updated in the past 24 hours.&lt;/p&gt;

&lt;h2&gt;Parsing unstructured text with regular expressions&lt;/h2&gt;

&lt;p&gt;Most infrastructure logs don't come neatly packaged as JSON. More often, they're plain text strings that follow a loose pattern, such as web server access logs, database query logs, or operating system messages. These logs are human-readable but difficult for machines to analyze until they're given some structure.&lt;/p&gt;

&lt;p&gt;The Collector addresses this with the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/regex_parser.md" rel="noopener noreferrer"&gt;regex_parser operator&lt;/a&gt;. Using regular expressions with named capture groups, you can break a raw log line into meaningful fields and promote them into structured attributes.&lt;/p&gt;

&lt;p&gt;For example, consider an &lt;a href="https://www.dash0.com/guides/nginx-logs" rel="noopener noreferrer"&gt;NGINX access log&lt;/a&gt; in the &lt;a href="https://en.wikipedia.org/wiki/Common_Log_Format" rel="noopener noreferrer"&gt;Common Log Format&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;127.0.0.1 - - [28/Sep/2025:20:30:00 +0000] "GET /api/v1/users HTTP/1.1" 200 512
127.0.0.1 - - [28/Sep/2025:20:30:05 +0000] "POST /api/v1/login HTTP/1.1" 401 128
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can configure the &lt;code&gt;regex_parser&lt;/code&gt; like this to parse them into structured attributes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/nginx/access.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt;
    &lt;span class="na"&gt;operators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex_parser&lt;/span&gt;
        &lt;span class="c1"&gt;# Use named capture groups to extract data&lt;/span&gt;
        &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;^(?P&amp;lt;client_ip&amp;gt;[^&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;]+)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;\[(?P&amp;lt;timestamp&amp;gt;[^\]]+)\]&lt;/span&gt;
          &lt;span class="s"&gt;"(?P&amp;lt;http_method&amp;gt;[A-Z]+)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(?P&amp;lt;http_path&amp;gt;[^&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"]+)[^"]*"&lt;/span&gt;
          &lt;span class="s"&gt;(?P&amp;lt;status_code&amp;gt;\d{3})&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(?P&amp;lt;response_size&amp;gt;\d+)$'&lt;/span&gt;
        &lt;span class="c1"&gt;# Parse the extracted timestamp&lt;/span&gt;
        &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;parse_from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;attributes.timestamp&lt;/span&gt;
          &lt;span class="na"&gt;layout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%d/%b/%Y:%H:%M:%S&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;%z"&lt;/span&gt;
        &lt;span class="c1"&gt;# Map status codes to severities&lt;/span&gt;
        &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;parse_from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;attributes.status_code&lt;/span&gt;
          &lt;span class="na"&gt;mapping&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;min&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
                &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;399&lt;/span&gt;
            &lt;span class="na"&gt;warn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4xx&lt;/span&gt;
            &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5xx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The core of this setup is the &lt;code&gt;regex&lt;/code&gt; expression with named capture groups. Each group labels a slice of the line so the parser can turn it into an attribute: &lt;code&gt;client_ip&lt;/code&gt; grabs the remote address, &lt;code&gt;timestamp&lt;/code&gt; captures the bracketed time string, &lt;code&gt;http_method&lt;/code&gt; and &lt;code&gt;http_path&lt;/code&gt; pull the request pieces, &lt;code&gt;status_code&lt;/code&gt; picks up the three-digit response code, and &lt;code&gt;response_size&lt;/code&gt; records the byte count.&lt;/p&gt;

&lt;p&gt;Once those attributes exist, the &lt;code&gt;timestamp&lt;/code&gt; field parses the &lt;code&gt;timestamp&lt;/code&gt; string into a proper datetime value, and the &lt;code&gt;severity&lt;/code&gt; block translates status codes into meaningful severity levels using an explicit &lt;code&gt;mapping&lt;/code&gt;: 2xx and 3xx responses as &lt;code&gt;INFO&lt;/code&gt;, 4xx as &lt;code&gt;WARN&lt;/code&gt;, and 5xx as &lt;code&gt;ERROR&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once access logs are ingested with this configuration, you'll see a structured log record with all the important pieces extracted as attributes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LogRecord #0
ObservedTimestamp: 2025-09-28 21:17:42.31729069 +0000 UTC
Timestamp: 2025-09-28 20:30:00 +0000 UTC
SeverityText: 200
SeverityNumber: Info(9)
Body: Str(127.0.0.1 - - [28/Sep/2025:20:30:00 +0000] "GET /api/v1/users HTTP/1.1" 200 512)
Attributes:
     -&amp;gt; status_code: Str(200)
     -&amp;gt; response_size: Str(512)
     -&amp;gt; log.file.name: Str(access.log)
     -&amp;gt; client_ip: Str(127.0.0.1)
     -&amp;gt; timestamp: Str(28/Sep/2025:20:30:00 +0000)
     -&amp;gt; http_method: Str(GET)
     -&amp;gt; http_path: Str(/api/v1/users)
Trace ID:
Span ID:
Flags: 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With a single expression and a couple of parsing steps, a flat NGINX access log is transformed into structured OpenTelemetry data. A natural next step is aligning the captured attributes with the &lt;a href="https://opentelemetry.io/docs/specs/semconv/http/" rel="noopener noreferrer"&gt;HTTP semantic conventions&lt;/a&gt; through the &lt;a href="https://www.dash0.com/guides/opentelemetry-attributes-processor" rel="noopener noreferrer"&gt;attributes processor&lt;/a&gt; or &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/transformprocessor/README.md" rel="noopener noreferrer"&gt;transform processor&lt;/a&gt;.&lt;/p&gt;
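&lt;p&gt;As a sketch of that next step, the transform processor can rename the captured attributes to their semantic-convention equivalents. The target attribute names below follow the HTTP semantic conventions at the time of writing; double-check them against the semconv version your backend expects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# otelcol.yaml
processors:
  transform/semconv:
    log_statements:
      - context: log
        statements:
          # Map the regex_parser attributes onto HTTP semantic conventions
          - set(attributes["http.request.method"], attributes["http_method"])
          - set(attributes["url.path"], attributes["http_path"])
          - set(attributes["http.response.status_code"], Int(attributes["status_code"]))
          # Drop the original, non-standard attribute names
          - delete_matching_keys(attributes, "^(http_method|http_path|status_code)$")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;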

&lt;h2&gt;Handling multiple log formats&lt;/h2&gt;

&lt;p&gt;Log files rarely come in just one flavor. For example, you might be ingesting NGINX logs, database logs, and application logs, each with their own format.&lt;/p&gt;

&lt;p&gt;The cleanest way to handle this is to define a separate &lt;code&gt;filelog&lt;/code&gt; receiver for each file type. Each receiver has its own parsing rules and runs independently, which keeps your setup organized and easy to debug.&lt;/p&gt;

&lt;p&gt;This is the best approach when the log formats are completely different and share no common structure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# NGINX access logs&lt;/span&gt;
  &lt;span class="na"&gt;filelog/nginx_access&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/nginx/access.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;operators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex_parser&lt;/span&gt;
        &lt;span class="c1"&gt;# ... NGINX access log parsing rules&lt;/span&gt;

  &lt;span class="c1"&gt;# NGINX error logs&lt;/span&gt;
  &lt;span class="na"&gt;filelog/nginx_error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/nginx/error.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;operators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex_parser&lt;/span&gt;
        &lt;span class="c1"&gt;# ... NGINX error log parsing rules&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sometimes, though, variation happens within a single file.&lt;/p&gt;

&lt;p&gt;Maybe most lines are simple messages, but others add extra fields like a &lt;code&gt;trace_id&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO: Application started successfully.
DEBUG: Processing request for trace_id=12345
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of writing one massive regex to cover every case, you can use conditional operators with &lt;code&gt;if&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/app.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;operators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Parse the basic structure of every line&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex_parser&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;base_parser&lt;/span&gt; &lt;span class="c1"&gt;# a unique ID is required when multiple operators of the same type is being used&lt;/span&gt;
        &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;^(?P&amp;lt;severity&amp;gt;\w+):&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(?P&amp;lt;message&amp;gt;.*)$'&lt;/span&gt;

      &lt;span class="c1"&gt;# Only run this parser when "trace_id" appears&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex_parser&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;trace_parser&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;attributes["message"]&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;matches&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"trace_id"'&lt;/span&gt;
        &lt;span class="na"&gt;parse_from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;attributes.message&lt;/span&gt;
        &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.*trace_id=(?P&amp;lt;trace_id&amp;gt;\w+).*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first parser runs on every log line and extracts &lt;code&gt;severity&lt;/code&gt; and &lt;code&gt;message&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The second parser runs only when the message contains &lt;code&gt;trace_id&lt;/code&gt;, enriching the log with that extra field.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By combining these two approaches (multiple receivers for unrelated formats, and conditional parsing for minor variations within a file), you can handle almost any kind of log your systems produce without creating unreadable or brittle configurations.&lt;/p&gt;
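&lt;p&gt;Whichever combination you choose, remember that each named receiver must also be wired into a logs pipeline. Assuming the receiver names from the NGINX example above and the debug exporter from the Quick Start, the service section might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# otelcol.yaml
service:
  pipelines:
    logs:
      # Every filelog/&lt;name&gt; receiver you define must be listed here to run
      receivers: [filelog/nginx_access, filelog/nginx_error]
      exporters: [debug]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;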

&lt;h2&gt;Handling stack traces and multiline logs&lt;/h2&gt;

&lt;p&gt;Not all log entries fit neatly on a single line. A stack trace is a classic example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-09-28 21:05:42 [ERROR] Unhandled exception: Cannot read property 'foo' of undefined
TypeError: Cannot read property 'foo' of undefined
    at Object.&amp;lt;anonymous&amp;gt; (/usr/src/app/index.js:15:18)
    at Module._compile (node:internal/modules/cjs/loader:1254:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1308:10)
    at Module.load (node:internal/modules/cjs/loader:1117:32)
    at Module._load (node:internal/modules/cjs/loader:958:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
    at node:internal/main/run_main_module:17:47
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you send this directly to the Collector, the filelog receiver will treat each line as a separate log record. That's not what you want, since the error message and every stack frame belong together.&lt;/p&gt;

&lt;p&gt;The fix is to use the &lt;code&gt;multiline&lt;/code&gt; configuration, which tells the receiver how to group lines into a single entry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/myapp/*.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt;

    &lt;span class="na"&gt;multiline&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# New entry starts when a line begins with "YYYY-MM-DD HH:MM:SS"&lt;/span&gt;
      &lt;span class="na"&gt;line_start_pattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}&lt;/span&gt;

    &lt;span class="na"&gt;operators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex_parser&lt;/span&gt;
        &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(?P&amp;lt;timestamp&amp;gt;\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2})\s+\[(?P&amp;lt;severity&amp;gt;[A-Za-z]+)\]\s+(?P&amp;lt;message&amp;gt;.+)&lt;/span&gt;

        &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;parse_from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;attributes.timestamp&lt;/span&gt;
          &lt;span class="na"&gt;layout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%d&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;%H:%M:%S"&lt;/span&gt;

        &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;parse_from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;attributes.severity&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the &lt;code&gt;line_start_pattern&lt;/code&gt; acts as the anchor. A new log entry begins only when a line starts with a date in the form &lt;code&gt;YYYY-MM-DD HH:MM:SS&lt;/code&gt;, and any line that doesn't match is appended to the previous one.&lt;/p&gt;

&lt;p&gt;The result is that the entire stack trace, from the error message down through each &lt;code&gt;at ...&lt;/code&gt; frame, gets captured as one structured log record. This preserves full context, making it far easier to analyze and troubleshoot errors.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LogRecord #0
ObservedTimestamp: 2025-10-07 12:04:26.963143642 +0000 UTC
Timestamp: 2025-09-28 21:05:42 +0000 UTC
SeverityText: ERROR
SeverityNumber: Error(17)
Body: Str(2025-09-28 21:05:42 [ERROR] Unhandled exception: Cannot read property 'foo' of undefined
TypeError: Cannot read property 'foo' of undefined
    at Object.&amp;lt;anonymous&amp;gt; (/usr/src/app/index.js:15:18)
    at Module._compile (node:internal/modules/cjs/loader:1254:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1308:10)
    at Module.load (node:internal/modules/cjs/loader:1117:32)
    at Module._load (node:internal/modules/cjs/loader:958:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
    at node:internal/main/run_main_module:17:47)
Attributes:
     -&amp;gt; log.file.name: Str(/var/log/myapp/app.log)
     -&amp;gt; message: Str(Unhandled exception: Cannot read property 'foo' of undefined)
     -&amp;gt; timestamp: Str(2025-09-28 21:05:42)
     -&amp;gt; severity: Str(ERROR)
Trace ID:
Span ID:
Flags: 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Parsing metadata from file headers
&lt;/h2&gt;

&lt;p&gt;Some log files don't just contain log entries. They begin with a header section that holds important metadata about the entire file. Without that context, the individual log lines can be hard to interpret.&lt;/p&gt;

&lt;p&gt;This pattern is common with batch jobs and export processes. For example, a nightly billing run might write a fresh log file for each execution. At the top of that file you might see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Job-ID: job-d8e8fca2
# Job-Type: nightly-billing-run
# Executed-By: scheduler-prod-1
# Records-To-Process: 1500
2025-10-08T08:20:00Z INFO: Starting billing run.
2025-10-08T08:21:15Z INFO: Processed account #1.
2025-10-08T08:21:16Z WARN: Account #2 has a negative balance.
. . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those first lines tell you exactly which job produced the logs that follow. If you ignore them, you lose that crucial context. The &lt;code&gt;header&lt;/code&gt; feature solves this by parsing metadata from the top of the file and stamping it onto every subsequent log record.&lt;/p&gt;

&lt;p&gt;It defines a small, dedicated pipeline that runs only on the initial block of lines. You need to specify a regex to match which lines belong to the header. The &lt;code&gt;metadata_operators&lt;/code&gt; then parse those lines into attributes, which are automatically added to every log entry that follows.&lt;/p&gt;

&lt;p&gt;To use this feature, you need to do three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Enable the &lt;code&gt;filelog.allowHeaderMetadataParsing&lt;/code&gt; feature gate:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otelcol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="nv"&gt;--config=/etc/otelcol-contrib/config.yaml&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="nv"&gt;--feature-gates=filelog.allowHeaderMetadataParsing&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
      &lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt; Set &lt;code&gt;start_at: beginning&lt;/code&gt; since the header has to be read from the top.&lt;/li&gt;
&lt;li&gt; Configure both the &lt;code&gt;header&lt;/code&gt; rules and the main &lt;code&gt;operators&lt;/code&gt; pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's the configuration to parse the headers in the sample log file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/jobs/*.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt; &lt;span class="c1"&gt;# required&lt;/span&gt;
    &lt;span class="na"&gt;header&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;^#&lt;/span&gt;
      &lt;span class="na"&gt;metadata_operators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;key_value_parser&lt;/span&gt;
          &lt;span class="na"&gt;delimiter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
          &lt;span class="na"&gt;pair_delimiter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;#&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what's happening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;pattern: ^#&lt;/code&gt; says that any line starting with &lt;code&gt;#&lt;/code&gt; belongs to the header. Those header lines are then passed through the pipeline of &lt;code&gt;metadata_operators&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/key_value_parser.md" rel="noopener noreferrer"&gt;key_value_parser&lt;/a&gt; operator splits each header line into a key and value using &lt;code&gt;:&lt;/code&gt; as the separator, while &lt;code&gt;#&lt;/code&gt; denotes the beginning of a new key/value pair.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This results in the following attributes on every log entry that follows in that file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Attributes:
     -&amp;gt; Job-ID: Str(job-d8e8fca2)
     -&amp;gt; Job-Type: Str(nightly-billing-run)
     -&amp;gt; Executed-By: Str(scheduler-prod-1)
     -&amp;gt; Records-To-Process: Str(1500)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the &lt;code&gt;Job-ID&lt;/code&gt; and other header fields are now attached to the log record, providing invaluable context that would have otherwise been lost.&lt;/p&gt;

&lt;p&gt;From here, you can process them further by promoting the header fields to &lt;a href="https://www.dash0.com/knowledge/what-are-opentelemetry-resources" rel="noopener noreferrer"&gt;resource attributes&lt;/a&gt; and aligning with &lt;a href="https://www.dash0.com/knowledge/otel-semantic-conventions-explainer" rel="noopener noreferrer"&gt;OpenTelemetry semantic conventions&lt;/a&gt;.&lt;/p&gt;
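&lt;p&gt;For example, a transform processor could promote the &lt;code&gt;Job-ID&lt;/code&gt; field to a resource attribute. This is only a sketch: the processor name and the &lt;code&gt;job.id&lt;/code&gt; attribute key are illustrative, not prescribed by any convention:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# otelcol.yaml
processors:
  transform/promote_job_metadata:
    log_statements:
      - context: log
        statements:
          # Copy the header field onto the resource, then drop the original
          - set(resource.attributes["job.id"], attributes["Job-ID"])
          - delete_key(attributes, "Job-ID")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;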

&lt;h2&gt;
  
  
  How to avoid lost or duplicate logs
&lt;/h2&gt;

&lt;p&gt;When the Collector restarts, log ingestion can easily go wrong if state is not preserved: you risk either re-ingesting old data or skipping over new logs. If you use &lt;code&gt;start_at: beginning&lt;/code&gt;, the receiver will reread all your log files and create massive duplication. With &lt;code&gt;start_at: end&lt;/code&gt;, you might miss any entries written while the Collector was down.&lt;/p&gt;

&lt;p&gt;The way to solve this is with &lt;strong&gt;checkpointing&lt;/strong&gt;. By configuring a storage extension, you instruct the &lt;code&gt;filelog&lt;/code&gt; receiver to save its position in each file (the last read offset) to disk and pick up exactly where it left off.&lt;/p&gt;

&lt;p&gt;The conventional approach is to use the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/storage/filestorage" rel="noopener noreferrer"&gt;file_storage extension&lt;/a&gt; for this purpose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;extensions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;file_storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/otelcol/storage&lt;/span&gt;

&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/myapp/*.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt;
    &lt;span class="c1"&gt;# Link the receiver to the storage extension&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;file_storage&lt;/span&gt;

&lt;span class="c1"&gt;# ... processors, exporters&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# The extension must be enabled in the service section&lt;/span&gt;
  &lt;span class="na"&gt;extensions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;file_storage&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the &lt;code&gt;storage&lt;/code&gt; extension enabled, the receiver will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; On startup, check the &lt;code&gt;/var/otelcol/storage&lt;/code&gt; directory for saved offsets.&lt;/li&gt;
&lt;li&gt; Resume reading from the saved offset for any file it was tracking, ensuring no data is lost or duplicated.&lt;/li&gt;
&lt;li&gt; Periodically update the storage with its latest progress.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Checkpointing ensures that log collection is resilient to restarts, upgrades, and even crashes. It is a critical best practice for reliable log ingestion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Handling log delivery failures gracefully
&lt;/h3&gt;

&lt;p&gt;Checkpointing with a storage extension protects you during Collector restarts, but another common failure mode is when the receiver reads a batch successfully but fails to hand it off to the next stage.&lt;/p&gt;

&lt;p&gt;This can happen if an exporter can't reach its endpoint, or the &lt;a href="https://www.dash0.com/guides/opentelemetry-memory-limiter-processor" rel="noopener noreferrer"&gt;memory limiter&lt;/a&gt; is refusing data. By default, the receiver will drop that batch of logs and move on to the next, causing silent data loss.&lt;/p&gt;

&lt;p&gt;To prevent this, the receiver has a built-in mechanism to retry sending failed batches. When &lt;code&gt;retry_on_failure&lt;/code&gt; is enabled, the receiver will pause, wait for a configured interval, and attempt to resend the exact same batch of logs. This process repeats with an &lt;a href="https://en.wikipedia.org/wiki/Exponential_backoff" rel="noopener noreferrer"&gt;exponential backoff&lt;/a&gt; until the batch is sent successfully or the &lt;code&gt;max_elapsed_time&lt;/code&gt; is reached:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;retry_on_failure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="c1"&gt;# Wait 5 seconds after the first failure before the first retry.&lt;/span&gt;
      &lt;span class="na"&gt;initial_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
      &lt;span class="c1"&gt;# The longest the receiver will wait between retries is 30 seconds.&lt;/span&gt;
      &lt;span class="na"&gt;max_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
      &lt;span class="c1"&gt;# Give up trying to send a batch after 10 minutes.&lt;/span&gt;
      &lt;span class="na"&gt;max_elapsed_time&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10m&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By combining checkpointing with a robust retry policy, you'll create a highly resilient log file ingestion pipeline that can withstand both Collector restarts and temporary downstream outages or throttling.&lt;/p&gt;
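&lt;p&gt;Putting the two together, a resilient receiver configuration might look like this (the paths are illustrative, and the retry intervals fall back to their defaults):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# otelcol.yaml
extensions:
  file_storage:
    directory: /var/otelcol/storage

receivers:
  filelog:
    include: [/var/log/myapp/*.log]
    start_at: beginning
    # Checkpointing: survive Collector restarts
    storage: file_storage
    # Retries: survive temporary downstream failures
    retry_on_failure:
      enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;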

&lt;h3&gt;
  
  
  Deleting log files after processing
&lt;/h3&gt;

&lt;p&gt;Some workflows call for processing a file once and then removing it to save space and avoid reprocessing. You can enable this with &lt;code&gt;delete_after_read&lt;/code&gt;, which requires &lt;code&gt;start_at: beginning&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/archives/*.gz&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt;
    &lt;span class="na"&gt;delete_after_read&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You must also enable the &lt;code&gt;filelog.allowFileDeletion&lt;/code&gt; feature gate for this to work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otelcol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="nv"&gt;--config=/etc/otelcol-contrib/config.yaml&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="nv"&gt;--feature-gates=filelog.allowFileDeletion&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
      &lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, ensure that the Collector process has sufficient permissions to delete the files. If it doesn't, you will see a "could not delete" error in the Collector's output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-10-08T06:42:03.973Z        error   reader/reader.go:278    could not delete        {"resource": {"service.instance.id": "7c0daf0e-e625-4da8-9577-072606dce057", "service.name": "otelcol-contrib", "service.version": "0.136.0"}, "otelcol.component.id": "filelog", "otelcol.component.kind": "receiver", "otelcol.signal": "logs", "component": "fileconsumer", "path": "/var/log/myapp/app.log", "filename": "/var/log/myapp/app.log"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just be careful when enabling this setting, as it permanently deletes the files from disk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling log rotation seamlessly
&lt;/h2&gt;

&lt;p&gt;Log files don't grow indefinitely. &lt;a href="https://www.dash0.com/guides/log-rotation-linux-logrotate" rel="noopener noreferrer"&gt;Eventually, they'll get rotated&lt;/a&gt; (or at least they should). The &lt;code&gt;filelog&lt;/code&gt; receiver is built to handle common rotation patterns, such as &lt;code&gt;app.log&lt;/code&gt; being renamed to &lt;code&gt;app.log.1&lt;/code&gt;, automatically and without losing data.&lt;/p&gt;

&lt;p&gt;Instead of relying on filenames alone, the receiver tracks each file using a unique fingerprint derived from the first few kilobytes of content. When rotation occurs, it recognizes that the original file has been renamed, finishes reading it, and then starts fresh from the beginning of the new &lt;code&gt;app.log&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This behavior requires no additional configuration; it works out of the box, giving you reliable log ingestion even in environments with frequent rotations.&lt;/p&gt;
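&lt;p&gt;One pattern worth considering when rotation also compresses old files: if you only want the active log, you can exclude the rotated copies explicitly. This is a sketch with hypothetical paths, not a required setting:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# otelcol.yaml
receivers:
  filelog:
    # Match the active file and its rotated siblings
    include: [/var/log/myapp/app.log*]
    # Skip compressed rotations (see the next section to read them instead)
    exclude: [/var/log/myapp/*.gz]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;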

&lt;h3&gt;
  
  
  Reading compressed files
&lt;/h3&gt;

&lt;p&gt;Many log rotation tools compress old logs to save disk space, producing files like &lt;code&gt;access.log.1.gz&lt;/code&gt;. The &lt;code&gt;filelog&lt;/code&gt; receiver can handle these seamlessly by decompressing them on the fly.&lt;/p&gt;

&lt;p&gt;To make this work, you use the &lt;code&gt;compression&lt;/code&gt; setting. This tells the receiver that some or all of the files it discovers may be compressed and need to be decompressed before parsing.&lt;/p&gt;

&lt;p&gt;You have two main choices for the &lt;code&gt;compression&lt;/code&gt; setting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;gzip&lt;/code&gt;: Treats all matched files as gzip-compressed.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;auto&lt;/code&gt;: Automatically detects compression based on file extension (currently &lt;code&gt;.gz&lt;/code&gt;). This is the best option when a directory contains a mix of active, uncompressed logs and older, compressed ones.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, if your directory has both &lt;code&gt;app.log&lt;/code&gt; (active) and &lt;code&gt;app.log.1.gz&lt;/code&gt; (rotated and compressed), you can configure the receiver like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/myapp/*&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt;
    &lt;span class="c1"&gt;# Automatically detect and decompress .gz files&lt;/span&gt;
    &lt;span class="na"&gt;compression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;auto&lt;/span&gt;
    &lt;span class="na"&gt;operators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex_parser&lt;/span&gt;
        &lt;span class="c1"&gt;# ... your parsing rules&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When working with compressed logs, there are two main things to keep in mind.&lt;/p&gt;

&lt;p&gt;First, the receiver assumes that compressed files can only grow by appending new data. If a file is completely rewritten, for example by taking the original content and recompressing it together with new lines, the receiver may not handle it correctly.&lt;/p&gt;

&lt;p&gt;Second, there's the question of fingerprinting. By default, the receiver identifies files based on their compressed bytes. This works fine in most cases, but if files are renamed or moved it can cause confusion. To make identification more reliable, you can enable the &lt;code&gt;filelog.decompressFingerprint&lt;/code&gt; feature gate. With this enabled, the fingerprint is calculated from the decompressed content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otelcol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="nv"&gt;--config=/etc/otelcol-contrib/config.yaml&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="nv"&gt;--feature-gates=filelog.decompressFingerprint&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
      &lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One caution: if you turn this feature on in an existing setup, the fingerprints will change. That means compressed files that were already read may be ingested again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance tuning for high-volume environments
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;filelog&lt;/code&gt; receiver's default settings are optimized for general use, but in production environments with hundreds of log files or very high throughput, you'll likely need to tune its performance.&lt;/p&gt;

&lt;p&gt;By default, the receiver tries to read from every matched file at once. On a system producing thousands of files, this can hog the CPU and quickly hit file handle limits.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;max_concurrent_files&lt;/code&gt; setting puts a cap on how many files are read at the same time. The default is &lt;code&gt;1024&lt;/code&gt;, but lowering this can keep your system from getting overwhelmed.&lt;/p&gt;

&lt;p&gt;Another key setting is &lt;code&gt;poll_interval&lt;/code&gt;, which controls how often the receiver checks for new files and new log lines. The default is 200ms, which means logs show up almost immediately, but CPU use goes up because the filesystem is scanned more often.&lt;/p&gt;

&lt;p&gt;For less critical logs or resource-constrained environments, bumping this to &lt;code&gt;1s&lt;/code&gt; or even &lt;code&gt;5s&lt;/code&gt; can be a good trade-off, as it reduces polling overhead with only a negligible impact on observability for most use cases.&lt;/p&gt;

&lt;p&gt;Finally, the &lt;code&gt;max_log_size&lt;/code&gt; setting guards against unusually large log entries. It defines the largest allowed entry; anything bigger gets truncated. The default of 1MiB is sensible for most workloads.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog/k8s_pods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/pods/*/*/*.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;max_concurrent_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
    &lt;span class="na"&gt;poll_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1s&lt;/span&gt;
    &lt;span class="na"&gt;max_log_size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2MiB&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Enforcing log file order
&lt;/h2&gt;

&lt;p&gt;Most of the time, the order in which log files are ingested doesn't matter. But some systems produce logs as a series of sequential files where processing order is critical.&lt;/p&gt;

&lt;p&gt;By default, the &lt;code&gt;filelog&lt;/code&gt; receiver reads all matching files concurrently, which means you could end up processing them out of sequence. The &lt;code&gt;ordering_criteria&lt;/code&gt; setting solves this by enforcing a strict order when reading files.&lt;/p&gt;

&lt;p&gt;For example, given a set of log files with the following naming convention:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;batch-run-001.log
batch-run-002.log
batch-run-003.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following configuration sorts and ingests these files in sequence:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog/batch_logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/log/batch-runs/batch-run-*.log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt;
    &lt;span class="na"&gt;ordering_criteria&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;top_n&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="c1"&gt;# Extract the sequence number from the filename&lt;/span&gt;
      &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch-run-(?P&amp;lt;seq_num&amp;gt;\d+)\.log&lt;/span&gt;
      &lt;span class="c1"&gt;# Sort files by the sequence number as a number, not a string&lt;/span&gt;
      &lt;span class="na"&gt;sort_by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;regex_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;seq_num&lt;/span&gt;
          &lt;span class="na"&gt;sort_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;numeric&lt;/span&gt;
          &lt;span class="na"&gt;ascending&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup, the receiver will discover all files matching &lt;code&gt;batch-run-*.log&lt;/code&gt;, extract the sequence number from each filename, and sort the files numerically in ascending order by that sequence.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;top_n&lt;/code&gt; property determines how many files will be tracked after applying the ordering criteria. With &lt;code&gt;top_n: 1&lt;/code&gt;, only the first file (&lt;code&gt;batch-run-001.log&lt;/code&gt;) will be tracked and ingested into the pipeline.&lt;/p&gt;
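&lt;p&gt;If you instead wanted to track the first three files in the sequence, you could raise &lt;code&gt;top_n&lt;/code&gt;, a variant sketch of the configuration above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# otelcol.yaml
receivers:
  filelog/batch_logs:
    include: [/var/log/batch-runs/batch-run-*.log]
    start_at: beginning
    ordering_criteria:
      # Track the three lowest-numbered files instead of only the first
      top_n: 3
      regex: batch-run-(?P&amp;lt;seq_num&amp;gt;\d+)\.log
      sort_by:
        - regex_key: seq_num
          sort_type: numeric
          ascending: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;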

&lt;h2&gt;
  
  
  Filelog receiver tips and best practices
&lt;/h2&gt;

&lt;p&gt;When troubleshooting the &lt;code&gt;filelog&lt;/code&gt; receiver, a few issues come up again and again. Here's how to diagnose and fix them quickly:&lt;/p&gt;

&lt;h3&gt;
  
  
  Log files are not being watched
&lt;/h3&gt;

&lt;p&gt;When the Collector starts watching a file for log entries, you'll see a log like this in its output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-10-09T08:47:05.574Z        info    fileconsumer/file.go:261        Started watching file   {...}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't see this message, or if you see the log below, it means that the receiver hasn't picked up any files yet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-10-09T09:25:20.280Z        warn    fileconsumer/file.go:49 finding files   {..., "error": "no files match the configured criteria"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start by double-checking your &lt;code&gt;include&lt;/code&gt;, &lt;code&gt;exclude&lt;/code&gt;, and &lt;code&gt;exclude_older_than&lt;/code&gt; settings to make sure your file patterns actually match the files you expect.&lt;/p&gt;

&lt;p&gt;Next, verify that the Collector process has permission to access both the files and their parent directories. Missing directory-level permissions are one of the most common reasons files aren't discovered or watched.&lt;/p&gt;

&lt;h3&gt;
  
  
  Files are watched but no log lines are read
&lt;/h3&gt;

&lt;p&gt;If you can see "Started watching file" messages but no logs are being collected, the most common cause is the &lt;code&gt;start_at&lt;/code&gt; setting. By default, it's set to &lt;code&gt;end&lt;/code&gt;, which tells the receiver to start reading only new lines appended after the Collector starts.&lt;/p&gt;

&lt;p&gt;When you're testing with an existing file that isn't actively being written to, this means nothing will appear. To read the entire file from the start, set &lt;code&gt;start_at&lt;/code&gt; to &lt;code&gt;beginning&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filelog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;start_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;beginning&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures the receiver processes all existing content the first time the file is discovered.&lt;/p&gt;

&lt;h3&gt;
  
  
  Regular expression doesn't match log lines
&lt;/h3&gt;

&lt;p&gt;If your logs aren't being parsed correctly, the issue is usually with your regular expression. When this happens, the Collector often logs an error like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-10-09T09:32:14.949Z        error   helper/transformer.go:154       Failed to process entry {"resource": {"service.instance.id": "f8ec2efd-16e9-44ad-9ed2-9f406e46719f", "service.name": "otelcol-contrib", "service.version": "0.136.0"}, "otelcol.component.id": "filelog", "otelcol.component.kind": "receiver", "otelcol.signal": "logs", "operator_id": "regex_parser", "operator_type": "regex_parser", "error": "regex pattern does not match", "action": "send", "entry.timestamp": "0001-01-01T00:00:00.000Z", "log.file.name": "batch-run-001.log"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before adjusting your Collector config, test the regex outside of it using a tool like &lt;a href="https://regex101.com/" rel="noopener noreferrer"&gt;Regex101&lt;/a&gt;. Make sure to select the &lt;strong&gt;Golang&lt;/strong&gt; flavor so it behaves the same way as the Collector's regex engine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmppgv21pumkd0xnc46h8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmppgv21pumkd0xnc46h8.png" alt="Regex101 being used to test OpenTelemetry Collector regular expression in Golang flavor" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're not seeing this error but your regex still isn't working, check whether the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/types/on_error.md" rel="noopener noreferrer"&gt;&lt;code&gt;on_error&lt;/code&gt; parameter&lt;/a&gt; is set to one of the &lt;code&gt;_quiet&lt;/code&gt; modes. Those values suppress operator errors unless the Collector log level is set to &lt;code&gt;DEBUG&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Common causes of regex mismatches include invisible spaces or tabs, anchors (&lt;code&gt;^&lt;/code&gt; or &lt;code&gt;$&lt;/code&gt;) that don't line up with the actual text, incorrect escaping, or small format differences between your log and your pattern. Double-check these details before investigating further.&lt;/p&gt;
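&lt;p&gt;A quick way to sanity-check a pattern locally: the &lt;code&gt;(?P&amp;lt;name&amp;gt;...)&lt;/code&gt; named-group syntax used by the Collector's Go regex engine also works in Python's &lt;code&gt;re&lt;/code&gt; module, so for simple patterns you can test against a sample line before redeploying. The pattern and log line below are illustrative, not taken from a real config:&lt;/p&gt;

```python
import re

# Hypothetical regex_parser pattern and sample line; the (?P<name>...)
# named-group syntax behaves the same in Python's re and Go's regexp
# for simple patterns like this one.
pattern = r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]+) (?P<msg>.*)$"
line = "2025-10-09T09:32:14 ERROR payment service timed out"

match = re.match(pattern, line)
if match:
    print(match.groupdict())  # each named group becomes a log attribute
else:
    print("regex pattern does not match")
```

&lt;p&gt;Keep in mind that Go's regex engine (RE2) does not support backreferences or lookarounds, so avoid those constructs even though Python accepts them.&lt;/p&gt;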

&lt;h3&gt;
  
  
  Logs are duplicated after restart
&lt;/h3&gt;

&lt;p&gt;If you notice duplicate logs appearing after the Collector restarts, it usually means the receiver isn't remembering where it left off. To fix this, enable a &lt;code&gt;storage&lt;/code&gt; extension so the &lt;code&gt;filelog&lt;/code&gt; receiver can checkpoint its position in each file.&lt;/p&gt;

&lt;p&gt;This allows the receiver to resume reading exactly where it stopped, preventing both data loss and duplication. Without it, the receiver will reread entire files from the start after every restart.&lt;/p&gt;
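&lt;p&gt;As a minimal sketch (the storage directory, file paths, and pipeline wiring here are illustrative), enabling the &lt;code&gt;file_storage&lt;/code&gt; extension and pointing the receiver at it looks like this:&lt;/p&gt;

```yaml
# otelcol.yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/storage

receivers:
  filelog:
    include: [/var/log/app/*.log]
    start_at: beginning
    # Persist read offsets so a restart resumes where it left off
    storage: file_storage

exporters:
  debug:

service:
  extensions: [file_storage]
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [debug]
```

&lt;p&gt;The storage directory must already exist and be writable by the Collector process.&lt;/p&gt;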

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;filelog&lt;/code&gt; receiver in OpenTelemetry is an essential bridge between traditional file-based logging (often with unstructured data) and the world of modern, structured observability.&lt;/p&gt;

&lt;p&gt;By mastering its core concepts of discovery, parsing with operators, and checkpointing, you can build a reliable log ingestion pipeline for any service that writes its logs to a file.&lt;/p&gt;

&lt;p&gt;Once you've transformed your raw text logs into well-structured OpenTelemetry data, the full observability ecosystem opens up. You can enrich, filter, and route them to any backend that speaks &lt;a href="https://www.dash0.com/knowledge/opentelemetry-protocol-otlp" rel="noopener noreferrer"&gt;OTLP&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6og47d2te9vq6goemzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6og47d2te9vq6goemzj.png" alt="Sending log data to Dash0" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a faster path from collecting telemetry to insight, consider using &lt;a href="https://www.dash0.com/" rel="noopener noreferrer"&gt;Dash0&lt;/a&gt;, an observability platform purpose-built for OpenTelemetry data. &lt;a href="https://www.dash0.com/sign-up" rel="noopener noreferrer"&gt;Try it out today with a free 14-day trial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opentelemetry</category>
      <category>logging</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Leveling Up Your Python Logs with Structlog</title>
      <dc:creator>Ayooluwa Isaiah</dc:creator>
      <pubDate>Thu, 27 Nov 2025 13:08:38 +0000</pubDate>
      <link>https://forem.com/dash0/leveling-up-your-python-logs-with-structlog-3a5a</link>
      <guid>https://forem.com/dash0/leveling-up-your-python-logs-with-structlog-3a5a</guid>
      <description>&lt;p&gt;&lt;a href="https://www.dash0.com/guides/logging-in-python" rel="noopener noreferrer"&gt;Python's standard logging module is capable&lt;/a&gt;, but shaping it into a system that produces structured, contextual, and queryable logs requires understanding a lot of concepts: hierarchical logging, formatters, filters, handlers, and configuration files. It can be done, but it often feels like you are building infrastructure instead of writing your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.structlog.org/" rel="noopener noreferrer"&gt;Structlog&lt;/a&gt; takes a different approach. Rather than wrestling with object hierarchies, you simply declare how each event should be processed and enriched. The result is logging that feels natural to write, while producing output that works just as well for humans skimming a console as it does for machines ingesting JSON into an observability platform.&lt;/p&gt;

&lt;p&gt;This guide takes a practical look at using Structlog as the foundation for a production-grade logging system. We will cover configuration, contextual data, structured exception handling, and integration with tracing via &lt;a href="https://www.dash0.com/knowledge/what-is-opentelemetry" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt;. By the end, you'll have the patterns you need to turn your application from an opaque box into one that is transparent and easy to understand.&lt;/p&gt;

&lt;p&gt;Let's begin!&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Structlog philosophy
&lt;/h2&gt;

&lt;p&gt;The standard library's &lt;code&gt;logging&lt;/code&gt; module is built around a small network of objects. You create a &lt;code&gt;Logger&lt;/code&gt;, attach one or more &lt;code&gt;Handler&lt;/code&gt; instances, give each handler a &lt;code&gt;Formatter&lt;/code&gt;, and sometimes add &lt;code&gt;Filters&lt;/code&gt;. A &lt;code&gt;LogRecord&lt;/code&gt; is created and handed off to that graph. It works, and it is flexible, but it can be hard to follow.&lt;/p&gt;

&lt;p&gt;Structlog takes a simpler path where each log event moves through a clear, linear chain of functions called &lt;strong&gt;processors&lt;/strong&gt;. When your code calls something like &lt;code&gt;logger.info("User logged in", user_id="usr_123")&lt;/code&gt;, &lt;code&gt;structlog&lt;/code&gt; immediately builds a mutable dictionary for that event that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"event"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"User logged in"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"usr_123"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That dictionary is then passed to each registered processor in order. A processor is just a function that gets three arguments: the logger, the method name, and the event dictionary. It can read the dictionary, add keys, remove keys, or tweak values.&lt;/p&gt;

&lt;p&gt;The last processor is the renderer. Its job is to turn the final dictionary into a string and write it to your chosen destination, such as the console, a file, or a socket.&lt;/p&gt;

&lt;p&gt;This declarative model is incredibly powerful because it provides a single source of truth. You can look at your list of processors and know &lt;em&gt;exactly&lt;/em&gt; how a log entry is built, step by step. There is no hidden state or complex object interaction. It is a clean, predictable, and easily debuggable flow.&lt;/p&gt;
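&lt;p&gt;You can verify that contract with plain Python before wiring anything into &lt;code&gt;structlog.configure()&lt;/code&gt;. The processor below is hypothetical; it simply follows the three-argument signature described above:&lt;/p&gt;

```python
import os

# A structlog processor: (logger, method_name, event_dict) -> event_dict.
# This hypothetical example enriches the event with the process ID and
# drops a noisy key that shouldn't reach the renderer.
def add_pid(logger, method_name, event_dict):
    event_dict["pid"] = os.getpid()
    event_dict.pop("internal_debug_blob", None)
    return event_dict

# Simulate what structlog does: pass the event dict through each processor in order.
event = {"event": "User logged in", "user_id": "usr_123", "internal_debug_blob": "..."}
for processor in [add_pid]:
    event = processor(None, "info", event)

print(event)
```

&lt;p&gt;To use it for real, add &lt;code&gt;add_pid&lt;/code&gt; to the &lt;code&gt;processors&lt;/code&gt; list passed to &lt;code&gt;structlog.configure()&lt;/code&gt;, anywhere before the renderer.&lt;/p&gt;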

&lt;h2&gt;
  
  
  Examining the default configuration
&lt;/h2&gt;

&lt;p&gt;Before diving into custom setups, it is helpful to see what Structlog does out of the box. The library ships with a default configuration that produces development-friendly logs without requiring you to write any setup code.&lt;/p&gt;

&lt;p&gt;First, install the library if you haven't already:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;structlog
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now try the simplest possible logger:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User profile updated&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;usr_f4b7a1c2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;req_9e8d5c3a-7b1f-4a8e-9c6d-0e2f1a3b4c5d&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;updated_fields&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;last_login&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;duration_ms&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;54.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;success&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run this, you should see output along the lines of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-09-05 18:13:33 [info     ] User profile updated           [__main__] duration_ms=54.3 request_id=req_9e8d5c3a-7b1f-4a8e-9c6d-0e2f1a3b4c5d status=success updated_fields=['email', 'last_login'] user_id=usr_f4b7a1c2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What you're seeing is a nicely formatted, colorized log line: a timestamp, the log level (&lt;code&gt;info&lt;/code&gt;), the message text, the logger's module name (&lt;code&gt;__main__&lt;/code&gt;), and finally the included key/value pairs.&lt;/p&gt;

&lt;p&gt;Behind the scenes, Structlog is quietly applying a handful of processors that enrich and format each event before it gets written out. Here's the default configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contextvars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;merge_contextvars&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add_log_level&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StackInfoRenderer&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dev&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;set_exc_info&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;TimeStamper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%d %H:%M:%S&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dev&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ConsoleRenderer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;wrapper_class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;make_filtering_bound_logger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NOTSET&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;context_class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;logger_factory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;PrintLoggerFactory&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;cache_logger_on_first_use&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the log event passes through the &lt;code&gt;processors&lt;/code&gt; in sequential order. Each one adds a small piece of structure: merging in any contextual values you've set elsewhere, attaching the log level, making sure exceptions or stack traces are displayed neatly when they occur, and including a timestamp.&lt;/p&gt;

&lt;p&gt;The very last step is the &lt;code&gt;ConsoleRenderer()&lt;/code&gt;, which takes the fully enriched event dictionary and turns it into the formatted, colorized line you see in your terminal.&lt;/p&gt;

&lt;p&gt;The other default behaviors of the Structlog logger are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;wrapper_class&lt;/code&gt;: This wrapper gives you a logger that can filter messages by log level. With &lt;code&gt;NOTSET&lt;/code&gt;, nothing is filtered out, so every message goes through. In practice, you'd raise this to &lt;code&gt;INFO&lt;/code&gt; or higher in production to reduce noise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;context_class&lt;/code&gt;: Structlog needs somewhere to store event data as it flows through the pipeline. By default, it uses a plain Python dictionary to keep things simple and predictable, but you could swap in something else (like &lt;code&gt;OrderedDict&lt;/code&gt;) if you want ordered keys or a custom data structure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;logger_factory&lt;/code&gt;: This determines where your logs are sent, which is &lt;code&gt;sys.stdout&lt;/code&gt; by default. You can switch to standard error with &lt;code&gt;structlog.PrintLoggerFactory(sys.stderr)&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cache_logger_on_first_use&lt;/code&gt;: By default, Structlog doesn't cache loggers. That means every call to &lt;code&gt;get_logger()&lt;/code&gt; creates a new one, which ensures that if you change the configuration at runtime, the new settings are applied immediately. If performance is critical, you can enable caching for a small speed boost.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Put together, these defaults make Structlog easy to experiment with: nothing gets filtered, logs print straight to your terminal, and you can reconfigure on the fly without restarting your process. It's a developer-friendly setup that you'll want to tighten up before going to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  The production configuration: machine-readable JSON
&lt;/h2&gt;

&lt;p&gt;In a production environment, the requirements are different. Logs are not primarily for humans to read in real-time; they are for machines to ingest, parse, index, and query. The industry standard for this is &lt;strong&gt;JSON&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Our production configuration will be similar to the default, but with a few changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;configure_structlog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contextvars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;merge_contextvars&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add_log_level&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StackInfoRenderer&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dev&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;set_exc_info&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dict_tracebacks&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;TimeStamper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;iso&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;JSONRenderer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;# must be the last one
&lt;/span&gt;        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;wrapper_class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;make_filtering_bound_logger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;context_class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;logger_factory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;PrintLoggerFactory&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="n"&gt;cache_logger_on_first_use&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The changes here include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Renderer&lt;/strong&gt; (&lt;code&gt;ConsoleRenderer&lt;/code&gt; → &lt;code&gt;JSONRenderer&lt;/code&gt;): Instead of a pretty, colorized line for humans, each event becomes a single JSON object so that log shippers and observability platforms can ingest it without guessing at formats or using brittle regular expressions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timestamp&lt;/strong&gt; (&lt;code&gt;fmt="iso"&lt;/code&gt;): Timestamps use &lt;a href="https://en.wikipedia.org/wiki/ISO_8601" rel="noopener noreferrer"&gt;ISO 8601 format&lt;/a&gt; in UTC, which avoids timezone confusion and preserves correct lexicographical ordering, especially across regions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dict tracebacks&lt;/strong&gt; (&lt;code&gt;dict_tracebacks&lt;/code&gt;): Exceptions are serialized into structured dictionaries instead of raw text. This makes stack traces machine-readable, so that observability tools can display them cleanly, and you can query or filter logs by exception type or message.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configurable log level&lt;/strong&gt;: The log level is now passed in as an argument, allowing you to control log verbosity in production without changing code, typically by reading an environment variable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching&lt;/strong&gt;: In production, you rarely hot-reload logging configuration, so caching gives a small performance boost by avoiding repeated wrapper setup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, you can dynamically choose your configuration at startup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;

&lt;span class="c1"&gt;# ... [logging configuration]
&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;APP_ENV&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;production&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getattr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LOG_LEVEL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;  &lt;span class="c1"&gt;# env vars are strings, so map the name to its numeric level&lt;/span&gt;
    &lt;span class="nf"&gt;configure_structlog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c1"&gt;# ... rest of your application logic
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern allows you to retain Structlog's development-friendly defaults, while switching to a production-ready JSON configuration automatically when the environment demands it.&lt;/p&gt;

&lt;p&gt;Note that while Structlog operates independently, it adopts the same level names and numeric values as the standard library. For convenience and clarity, we use the constants from the &lt;code&gt;logging&lt;/code&gt; module (like &lt;code&gt;logging.INFO&lt;/code&gt;) to set these levels.&lt;/p&gt;
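&lt;p&gt;Those shared numeric values are easy to confirm:&lt;/p&gt;

```python
import logging

# Structlog's filtering wrapper compares against these standard numeric
# levels, so a logger built with make_filtering_bound_logger(logging.INFO)
# drops anything below 20.
for name in ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"):
    print(name, getattr(logging, name))
```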

&lt;p&gt;Assuming you set &lt;code&gt;APP_ENV=production&lt;/code&gt; in your environment, you'll see the following JSON output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"usr_f4b7a1c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"req_9e8d5c3a-7b1f-4a8e-9c6d-0e2f1a3b4c5d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"updated_fields"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"last_login"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"duration_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;54.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"success"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"event"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"User profile updated"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T07:40:44.956022Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The log message is placed in an &lt;code&gt;event&lt;/code&gt; key, but you can rename it to &lt;code&gt;msg&lt;/code&gt; by using the &lt;a href="https://www.structlog.org/en/stable/api.html#structlog.processors.EventRenamer" rel="noopener noreferrer"&gt;EventRenamer() processor&lt;/a&gt; as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="c1"&gt;# [...]
&lt;/span&gt;    &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;EventRenamer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;msg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;JSONRenderer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;event&lt;/code&gt; key will be renamed to &lt;code&gt;msg&lt;/code&gt; accordingly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T07:46:58.238599Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"User profile updated"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The examples in the remainder of this article will assume that you're using the production configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  How log levels work in Structlog
&lt;/h2&gt;

&lt;p&gt;Structlog keeps the same log levels you may already know from Python's &lt;a href="https://www.dash0.com/guides/logging-in-python#controlling-the-signal-to-noise-ratio-with-log-levels" rel="noopener noreferrer"&gt;standard logging module&lt;/a&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;Numeric Value&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;NOTSET&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;Special: disables log-level filtering&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DEBUG&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Detailed diagnostic information&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;INFO&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;Normal application events&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;WARNING&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;td&gt;Potential problems&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ERROR&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;td&gt;Failed operations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CRITICAL&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;Severe failures&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each level besides &lt;code&gt;NOTSET&lt;/code&gt; has a corresponding method on the &lt;code&gt;logger&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;a debug message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;an info message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;a warning message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;a error message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;critical&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;a critical message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're working in an async context, Structlog also provides async variants of each method, prefixed with an &lt;code&gt;a&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;f&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ainfo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;async info message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;f&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you've already seen, the level threshold is controlled by the &lt;code&gt;wrapper_class&lt;/code&gt; argument to &lt;code&gt;structlog.configure()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;wrapper_class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;make_filtering_bound_logger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The argument to &lt;code&gt;make_filtering_bound_logger()&lt;/code&gt; can be either a level name string (like &lt;code&gt;"INFO"&lt;/code&gt;) or one of the constants from the &lt;code&gt;logging&lt;/code&gt; module (such as &lt;code&gt;logging.INFO&lt;/code&gt;). Level-based filtering happens early, &lt;em&gt;before&lt;/em&gt; the event dictionary is created, to avoid doing unnecessary work for a message that will ultimately be discarded.&lt;/p&gt;

&lt;p&gt;Structlog also makes the log level explicit inside the event dictionary itself. This happens thanks to the &lt;code&gt;add_log_level&lt;/code&gt; processor, which is included in the default configuration.&lt;/p&gt;

&lt;p&gt;Downstream processors (like the renderer) then use that field to decide how the log line should appear: a &lt;a href="https://www.structlog.org/en/stable/console-output.html" rel="noopener noreferrer"&gt;colorized console message&lt;/a&gt; in development, or a structured JSON object in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4j9vnc08a8iqo6z9az4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4j9vnc08a8iqo6z9az4.png" alt="Colorized log messages are rendered in development environments" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Filtering and dropping events
&lt;/h2&gt;

&lt;p&gt;In some cases, log-level filtering isn't enough. You may want to drop or modify logs based on their content: for example, to exclude noisy health checks or to mask sensitive fields. You can do this with a custom processor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;filter_logs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;method_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/health&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DropEvent&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;

&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="n"&gt;filter_logs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c1"&gt;# [...]
&lt;/span&gt;    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a processor raises &lt;a href="https://www.structlog.org/en/stable/api.html#structlog.DropEvent" rel="noopener noreferrer"&gt;structlog.DropEvent&lt;/a&gt;, the event is discarded and no log line is emitted.&lt;/p&gt;

&lt;h3&gt;
  
  
  Filtering by call site information
&lt;/h3&gt;

&lt;p&gt;Sometimes you don't just want to filter logs by level or custom fields; you want to filter them based on where they came from. Structlog makes this possible with the &lt;a href="https://www.structlog.org/en/stable/api.html#structlog.processors.CallsiteParameterAdder" rel="noopener noreferrer"&gt;&lt;code&gt;CallsiteParameterAdder&lt;/code&gt;&lt;/a&gt;, which can enrich your event dictionary with details like the module name, function name, line number, or thread ID. Once those fields are available, you can write a processor that decides which events to keep.&lt;/p&gt;

&lt;p&gt;Let's say you have a simple application with two operations: processing an order and canceling an order:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_order&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Order processed successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;cancel_order&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Order canceled&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;process_order&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ord_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;cancel_order&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ord_456&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This produces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"order_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ord_123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T13:59:03.397454Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Order processed successfully"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"func_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"process_order"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"order_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ord_456"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"warning"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T13:59:03.397618Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Order canceled"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"func_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cancel_order"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, suppose you only care about logs from &lt;code&gt;process_order&lt;/code&gt; and want to ignore everything else. You can add a custom processor that drops events from the unwanted function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;filter_out_cancellations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;func_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cancel_order&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DropEvent&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;

&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;CallsiteParameterAdder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CallsiteParameter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FUNC_NAME&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;filter_out_cancellations&lt;/span&gt; &lt;span class="c1"&gt;# must be placed after CallsiteParameterAdder
&lt;/span&gt;    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this configuration, calling both functions again yields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"order_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ord_123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T13:59:03.397454Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Order processed successfully"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"func_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"process_order"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;cancel_order&lt;/code&gt; log entry has now been filtered out.&lt;/p&gt;

&lt;p&gt;The reason this setup works is that &lt;code&gt;CallsiteParameterAdder&lt;/code&gt; adds details about where the log call was made, such as the function name. Once that information is present in the event dictionary, the custom &lt;code&gt;filter_out_cancellations&lt;/code&gt; processor can examine it and decide what to do. If the function name matches &lt;code&gt;cancel_order&lt;/code&gt;, it raises &lt;code&gt;DropEvent&lt;/code&gt;, which tells Structlog to discard the log entirely.&lt;/p&gt;

&lt;p&gt;Because processors are executed in order, the event first gains the extra metadata, then it is evaluated by the filter, and finally the surviving events are handed off to the renderer. The result is that only logs from &lt;code&gt;process_order&lt;/code&gt; appear in the output, while logs from &lt;code&gt;cancel_order&lt;/code&gt; are silently filtered out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing logs to files
&lt;/h2&gt;

&lt;p&gt;The 12-Factor App methodology &lt;a href="https://12factor.net/logs" rel="noopener noreferrer"&gt;recommends writing logs to standard output&lt;/a&gt; and letting the platform handle collection, and that's still the best approach in containerized and cloud environments. However, some deployments do require logs to be written directly to files.&lt;/p&gt;

&lt;p&gt;In such cases, you can configure the &lt;code&gt;PrintLoggerFactory&lt;/code&gt; as follows so logs are sent to a file instead of &lt;code&gt;stdout&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pathlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Path&lt;/span&gt;

&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[...],&lt;/span&gt;
    &lt;span class="n"&gt;logger_factory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;PrintLoggerFactory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;with_suffix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.log&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use the &lt;a href="https://www.structlog.org/en/stable/api.html#structlog.WriteLoggerFactory" rel="noopener noreferrer"&gt;WriteLoggerFactory&lt;/a&gt;, which the documentation describes as "a little faster" than &lt;code&gt;PrintLoggerFactory&lt;/code&gt; at the cost of some versatility.&lt;/p&gt;

&lt;p&gt;Structlog itself doesn't handle rotation or retention, but leaves such tasks to a dedicated system utility like &lt;a href="https://www.dash0.com/guides/log-rotation-linux-logrotate" rel="noopener noreferrer"&gt;Logrotate&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mastering contextual logging
&lt;/h2&gt;

&lt;p&gt;The single most important practice that elevates log records from a stream of messages into a true observability signal is &lt;strong&gt;context&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In Structlog, every logging call lets you attach structured key-value pairs alongside your message. These fields travel with the log event through the processor pipeline and end up in the final output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User profile updated&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;usr_f4b7a1c2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;req_9e8d5c3a&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;success&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;duration_ms&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;54.3&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of just a sentence like "User profile updated", you now have rich, machine-readable details: which user was affected, which request triggered the change, whether it succeeded, and how long it took.&lt;/p&gt;

&lt;p&gt;You can also &lt;a href="https://www.structlog.org/en/stable/bound-loggers.html" rel="noopener noreferrer"&gt;bind context to a logger&lt;/a&gt; so that it's included automatically in every message from that logger. For instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# bind() returns a copy of `logger` with user_id added to its context
&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;usr_f4b7a1c2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Fetching user profile&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Profile fetched successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both log lines will now include the &lt;code&gt;user_id&lt;/code&gt; field without you having to repeat it each time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"usr_f4b7a1c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T09:00:25.822429Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Fetching user profile"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"usr_f4b7a1c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T09:00:25.822570Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Profile fetched successfully"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you later decide to remove bound fields from the logger's context, you can use &lt;a href="https://www.structlog.org/en/stable/api.html#structlog.BoundLoggerBase.unbind" rel="noopener noreferrer"&gt;&lt;code&gt;unbind()&lt;/code&gt;&lt;/a&gt; or &lt;a href="https://www.structlog.org/en/stable/api.html#structlog.BoundLoggerBase.try_unbind" rel="noopener noreferrer"&gt;&lt;code&gt;try_unbind()&lt;/code&gt;&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;unbind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# will throw an error if the key doesn't exist
&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;try_unbind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# missing keys are ignored
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Reliable context propagation in web apps
&lt;/h3&gt;

&lt;p&gt;While &lt;code&gt;bind()&lt;/code&gt; is useful, it has limitations in highly concurrent environments like web applications: passing a request-specific logger instance down through every function call quickly becomes clumsy.&lt;/p&gt;

&lt;p&gt;A much more powerful and elegant solution is to use &lt;strong&gt;context variables&lt;/strong&gt; through &lt;a href="https://www.structlog.org/en/stable/contextvars.html" rel="noopener noreferrer"&gt;&lt;code&gt;structlog.contextvars&lt;/code&gt;&lt;/a&gt;. This takes advantage of Python's &lt;code&gt;contextvars&lt;/code&gt; module to store context that is scoped to the current thread or async task.&lt;/p&gt;

&lt;p&gt;Each request (or background job) gets its own isolated context, so you never have to worry about data leaking between concurrent executions.&lt;/p&gt;

&lt;p&gt;That's why our production configuration includes the &lt;code&gt;structlog.contextvars.merge_contextvars&lt;/code&gt; processor for pulling the context into each log event automatically.&lt;/p&gt;

&lt;p&gt;All you need to do is bind values at the beginning of a request or task, and those values will show up in every log line until the context is cleared.&lt;/p&gt;

&lt;p&gt;Here's an example of how you might set this up in a FastAPI middleware:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;HTTPException&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;

&lt;span class="c1"&gt;# ...your structlog configuration
&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@app.middleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;context_middleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;call_next&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;start_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monotonic&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# This is the request ID that will be attached to all logs for this request
&lt;/span&gt;    &lt;span class="n"&gt;request_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-Request-ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uuid4&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt;
    &lt;span class="n"&gt;client_ip&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;user_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Clear any existing context from previous requests
&lt;/span&gt;    &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contextvars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clear_contextvars&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="c1"&gt;# All logs produced in this request will share the same request_id
&lt;/span&gt;    &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contextvars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind_contextvars&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;request_logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;client_ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;client_ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;user_agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;request_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Incoming %s request to %s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;call_next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;duration_ms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monotonic&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;start_time&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;

    &lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INFO&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ERROR&lt;/span&gt;
    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WARNING&lt;/span&gt;

    &lt;span class="n"&gt;request_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%s %s completed with status %s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;duration_ms&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;


    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;

&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/users/{user_id}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_user_profile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# You can even add more context, scoped just to this function
&lt;/span&gt;    &lt;span class="c1"&gt;# and any downstream functions
&lt;/span&gt;    &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contextvars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind_contextvars&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User profile requested.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;HTTPException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;detail&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Item not found&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Successfully retrieved user profile.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;uvicorn&lt;/span&gt;

    &lt;span class="n"&gt;uvicorn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nb"&gt;reload&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;access_log&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup, every request to your API gets its own request ID bound to the logging context, so it's automatically included in every log message emitted during that request, without you having to pass a logger or &lt;code&gt;request_id&lt;/code&gt; around manually.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/users/12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"client_ip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"user_agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"curl/8.7.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"510ec4b6-f27f-4380-9082-487a3193094e"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T10:33:03.077963Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Incoming GET request to /users/12"&lt;/span&gt;&lt;span 
class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"510ec4b6-f27f-4380-9082-487a3193094e"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T10:33:03.078208Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"User profile requested."&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within the scope of a function, you can continue to use &lt;code&gt;bind()&lt;/code&gt; for temporary fields that are only relevant to a narrow slice of work (see &lt;code&gt;request_logger&lt;/code&gt; in the access-logging middleware above), or call &lt;code&gt;structlog.contextvars.bind_contextvars()&lt;/code&gt; again to add new fields that automatically propagate to loggers in downstream functions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding global context
&lt;/h3&gt;

&lt;p&gt;If you need some variables to appear in every single log record regardless of how the logger is obtained, you can add them with a processor that runs for each event.&lt;/p&gt;

&lt;p&gt;You only need to capture the values at startup, then merge them into the event dictionary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;_os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;socket&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;

&lt;span class="n"&gt;APP_VERSION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2.4.1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add_global_fields_factory&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SERVICE_NAME&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-service&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;env&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;APP_ENV&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;development&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;REGION&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;local&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;host&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gethostname&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;pid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getpid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;APP_VERSION&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add_global_fields&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;method_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setdefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;service&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setdefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;env&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setdefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;region&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setdefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;version&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setdefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;host&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setdefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pid&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;add_global_fields&lt;/span&gt;

&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="nf"&gt;add_global_fields_factory&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;# add this as the first processor
&lt;/span&gt;        &lt;span class="c1"&gt;# [...]
&lt;/span&gt;    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each record from the service will now contain the global fields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user-service"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"development"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"region"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2.4.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"host"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MacBook-Pro.local"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"pid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;82904&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"4af0acc1-1064-4470-a538-bb9862cd2154"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T10:18:14.194947Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Successfully retrieved user profile."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Capturing Python errors and exceptions
&lt;/h2&gt;

&lt;p&gt;When an error occurs in production, your logs are your first and often best debugging tool. A plain traceback string is helpful, but a structured exception record is far more powerful. This is exactly what the &lt;code&gt;dict_tracebacks&lt;/code&gt; processor gives you.&lt;/p&gt;

&lt;p&gt;The key to this is using &lt;code&gt;logger.exception()&lt;/code&gt;. While you can log errors with &lt;code&gt;logger.error()&lt;/code&gt;, using &lt;code&gt;logger.exception()&lt;/code&gt; inside an &lt;code&gt;except&lt;/code&gt; block is the preferred pattern. It automatically captures the active exception and passes it through the pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dict_tracebacks&lt;/span&gt; &lt;span class="c1"&gt;# ensure dict_tracebacks is configured
&lt;/span&gt;        &lt;span class="c1"&gt;# [...]
&lt;/span&gt;    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Dividing by zero&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the &lt;code&gt;dict_tracebacks&lt;/code&gt; processor enabled, the resulting JSON log contains a fully structured representation of the exception. Here's a simplified example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Dividing by zero"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-06T12:28:16.625474Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"exception"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"exc_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ZeroDivisionError"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"exc_value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"division by zero"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"frames"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"filename"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"main.py"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"lineno"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;37&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;module&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"locals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"logger"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;BoundLoggerLazyProxy ...&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of a traceback string, you now have a structured object that exposes the exception type, value, and even stack frames. This structure unlocks powerful new workflows in your log aggregation system. For example, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Query for all logs where &lt;code&gt;exception.exc_type&lt;/code&gt; equals &lt;code&gt;ZeroDivisionError&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;Count how many errors originated in a particular function by filtering on the frame names in &lt;code&gt;exception.frames&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;Trigger alerts if &lt;code&gt;exception.exc_value&lt;/code&gt; contains a specific string.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With structured tracebacks, your logs become more than just text. They become queryable data that dramatically reduces the time it takes to detect, diagnose, and fix production issues.&lt;/p&gt;
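To make that concrete, here is a minimal sketch of such a query over newline-delimited JSON logs. The log lines and the `errors_of_type` helper are illustrative, but the `exception` field mirrors the shape `dict_tracebacks` produces:

```python
import json

# Hypothetical NDJSON log lines, shaped like structlog's dict_tracebacks output.
log_lines = [
    '{"level": "error", "msg": "Dividing by zero", '
    '"exception": [{"exc_type": "ZeroDivisionError", "exc_value": "division by zero"}]}',
    '{"level": "info", "msg": "Order processed successfully"}',
    '{"level": "error", "msg": "Lookup failed", '
    '"exception": [{"exc_type": "KeyError", "exc_value": "user_id"}]}',
]

def errors_of_type(lines, exc_type):
    """Yield parsed records whose exception list contains the given type."""
    for line in lines:
        record = json.loads(line)
        for exc in record.get("exception", []):
            if exc.get("exc_type") == exc_type:
                yield record
                break

matches = list(errors_of_type(log_lines, "ZeroDivisionError"))
print(len(matches))       # 1
print(matches[0]["msg"])  # Dividing by zero
```

A real log aggregation backend would run the equivalent filter as a query over the indexed `exception.exc_type` field rather than scanning raw lines.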

&lt;p&gt;In development environments, you can install the &lt;a href="https://github.com/Textualize/rich" rel="noopener noreferrer"&gt;rich&lt;/a&gt; library to render a colorful traceback in the terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn4k7ppi4l4h0tekygbd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn4k7ppi4l4h0tekygbd.png" alt="Colorful Python tracebacks with Structlog and Rich" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Structlog with OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;Structlog does not automatically add trace or span identifiers to your logs. To correlate logs with traces, you attach those fields yourself with a small processor that reads the current OpenTelemetry span and injects its IDs into the event dictionary.&lt;/p&gt;

&lt;p&gt;Once in place, every log written inside an active span will carry &lt;code&gt;trace_id&lt;/code&gt; and &lt;code&gt;span_id&lt;/code&gt;, which makes it possible to see your spans and logs in the same context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgz4mp442h5zcnfos64d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgz4mp442h5zcnfos64d.png" alt="Dash0 trace view showing spans and their correlated log events" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is a compact setup that wires Structlog to OpenTelemetry and adds the two IDs on every event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.sdk.trace&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TracerProvider&lt;/span&gt;

&lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_tracer_provider&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TracerProvider&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;tracer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# [... rest of your tracing configuration]
&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add_open_telemetry_spans&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;span&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_current_span&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_recording&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;

    &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_span_context&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;parent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getattr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;parent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;span_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;span_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;016x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;event_dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;trace_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;trace_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;032x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;event_dict&lt;/span&gt;

&lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="c1"&gt;# [...]
&lt;/span&gt;        &lt;span class="n"&gt;add_open_telemetry_spans&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;JSONRenderer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;# must be the last one
&lt;/span&gt;    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;structlog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_order&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;process order&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Order processed successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;process_order&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ord_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the &lt;code&gt;logger.info()&lt;/code&gt; call runs inside an active span, your JSON log will now include the OpenTelemetry identifiers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"order_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ord_123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-07T07:13:43.906388Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"msg"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Order processed successfully"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"span_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"da8405273d89b065"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"trace_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"442d81cb25de382054575e33c1a659df"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What's left is bringing your logs into an OpenTelemetry-native platform like &lt;a href="https://www.dash0.com/" rel="noopener noreferrer"&gt;Dash0&lt;/a&gt; where they can be filtered and correlated with other signals like &lt;a href="https://www.dash0.com/knowledge/logs-metrics-and-traces-observability" rel="noopener noreferrer"&gt;metrics and traces&lt;/a&gt; to give you a complete picture of your system's health.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq45hu846h5x9ue0e6mz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq45hu846h5x9ue0e6mz5.png" alt="Dash0 log view" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the time of writing, there is no way to export &lt;a href="https://www.dash0.com/knowledge/opentelemetry-protocol-otlp#how-logs-are-represented-in-otlp" rel="noopener noreferrer"&gt;native OTLP logs&lt;/a&gt; directly from Structlog, so you have two options.&lt;/p&gt;

&lt;p&gt;First, keep Structlog writing JSON to standard output or a file and let the &lt;a href="https://www.dash0.com/guides/opentelemetry-collector" rel="noopener noreferrer"&gt;OpenTelemetry Collector&lt;/a&gt; convert it to the &lt;a href="https://opentelemetry.io/docs/specs/otel/logs/data-model/" rel="noopener noreferrer"&gt;OTLP log schema&lt;/a&gt; before forwarding it to your backend. This keeps the application simple and pushes protocol concerns to the infrastructure.&lt;/p&gt;
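As a rough sketch of that first option (file paths, the timestamp layout, and the endpoint here are illustrative assumptions, not values from this article), a Collector pipeline that tails the file, parses each line as JSON, and exports over OTLP could look like:

```yaml
receivers:
  filelog:
    include: [/var/log/myapp/*.log]  # illustrative path to the structlog output
    operators:
      - type: json_parser            # parse each JSON line into log attributes
        timestamp:
          parse_from: attributes.timestamp
          layout_type: strptime
          layout: '%Y-%m-%dT%H:%M:%S.%fZ'

exporters:
  otlp:
    endpoint: collector.example.com:4317  # illustrative backend endpoint

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]
```

Consult the filelog receiver documentation for the exact operator options available in your Collector version.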

&lt;p&gt;The second option requires that you &lt;a href="https://www.structlog.org/en/stable/standard-library.html" rel="noopener noreferrer"&gt;bridge Structlog to the standard logging ecosystem&lt;/a&gt; and attach an OTLP-capable handler. You'll need to configure &lt;code&gt;structlog.stdlib.LoggerFactory()&lt;/code&gt; and a &lt;code&gt;ProcessorFormatter&lt;/code&gt;, then attach a handler that exports to an endpoint speaking OTLP (usually the OTel Collector).&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;Structlog turns Python logging into a stream of structured events rather than plain text. With its processor pipeline, you can enrich logs with context, filter noise, and render output in formats that suit both humans and machines.&lt;/p&gt;

&lt;p&gt;We began with simple console logs, then moved to production configurations with JSON rendering, timestamps, and structured exceptions. We explored how &lt;code&gt;bind()&lt;/code&gt; and &lt;code&gt;contextvars&lt;/code&gt; add valuable context, how callsite parameters provide control, and how integration with OpenTelemetry connects logs to traces.&lt;/p&gt;

&lt;p&gt;The takeaway is clear: logs are data. Treating them as structured signals with Structlog makes debugging, monitoring, and operating modern applications far more effective.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>python</category>
      <category>logging</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Mastering Docker Logs: A Comprehensive Tutorial</title>
      <dc:creator>Ayooluwa Isaiah</dc:creator>
      <pubDate>Fri, 14 Nov 2025 13:15:43 +0000</pubDate>
      <link>https://forem.com/dash0/mastering-docker-logs-a-comprehensive-tutorial-55l0</link>
      <guid>https://forem.com/dash0/mastering-docker-logs-a-comprehensive-tutorial-55l0</guid>
      <description>&lt;p&gt;You've just deployed a new feature. It's not on fire, but it's not quite right either. An API response is missing a field, and performance seems a bit off. Where do you begin to unravel the mystery? &lt;strong&gt;You start with the logs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In a containerized environment, however, logging isn't always straightforward. Logs are ephemeral, dispersed across multiple containers, and can become unmanageable without the right strategy.&lt;/p&gt;

&lt;p&gt;This guide covers everything you need to know about Docker logs. We'll start with the simplest commands to view logs in real-time and progress to designing a robust, production-grade logging strategy for your entire containerized infrastructure.&lt;/p&gt;

&lt;p&gt;Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick start: the &lt;code&gt;docker logs&lt;/code&gt; command reference
&lt;/h2&gt;

&lt;p&gt;For when you need answers &lt;em&gt;now&lt;/em&gt;, here are the commands you'll reach for most often:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Action&lt;/th&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;View all logs for a container&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker logs &amp;lt;container&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Follow logs in real-time (tail)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker logs -f &amp;lt;container&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tail the last 100 lines&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker logs --tail 100 &amp;lt;container&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;View logs from the last 15 minutes&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker logs --since 15m &amp;lt;container&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;View logs for a Docker Compose service&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker compose logs &amp;lt;service&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Follow logs for all Compose services&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker compose logs -f&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Remove the service prefix in Docker Compose logs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker compose logs --no-log-prefix &amp;lt;service&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Understanding how Docker logging works
&lt;/h2&gt;

&lt;p&gt;Docker is designed to capture the &lt;strong&gt;standard output&lt;/strong&gt; (&lt;code&gt;stdout&lt;/code&gt;) and &lt;strong&gt;standard error&lt;/strong&gt; (&lt;code&gt;stderr&lt;/code&gt;) streams from the main process running inside a container.&lt;/p&gt;

&lt;p&gt;This means that if you are containerizing your own services, you should ensure that they're writing their logs to &lt;code&gt;stdout&lt;/code&gt; or &lt;code&gt;stderr&lt;/code&gt; so that Docker's built-in logging system can capture them.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;logging driver&lt;/strong&gt; acts as the backend for these logs. It receives the log streams from the container and determines whether to store them in a file or forward them to an endpoint.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where Docker stores container logs
&lt;/h3&gt;

&lt;p&gt;By default, Docker uses the &lt;a href="https://docs.docker.com/engine/logging/drivers/json-file/" rel="noopener noreferrer"&gt;json-file logging driver&lt;/a&gt; to write the captured container logs to a file on the host machine.&lt;/p&gt;

&lt;p&gt;Here's the typical Docker logs location for a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/var/lib/docker/containers/&amp;lt;container-id&amp;gt;/&amp;lt;container-id&amp;gt;-json.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find the log file path for a specific container with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker inspect &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s1"&gt;'{{.LogPath}}'&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/var/lib/docker/containers/612646b55e41d73a3f1a24afa736ef173981ed753506097d1a888e7b9cb7d6ac/612646b55e41d73a3f1a24afa736ef173981ed753506097d1a888e7b9cb7d6ac-json.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In most cases, you won't need to interact with these log files directly, since this is the file that &lt;code&gt;docker logs&lt;/code&gt; reads behind the scenes.&lt;/p&gt;
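If you ever do need to read one directly, each line in that file is a single JSON object with `log`, `stream`, and `time` fields, so it is easy to parse. A minimal sketch with a made-up log line:

```python
import json

# A made-up line in the format the json-file driver writes:
# one JSON object per captured log line.
raw = (
    '{"log":"GET /healthz 200\\n",'
    '"stream":"stdout",'
    '"time":"2025-06-13T10:30:00.123456789Z"}'
)

entry = json.loads(raw)
print(entry["stream"])            # stdout
print(entry["log"].rstrip("\n"))  # GET /healthz 200
```

The `log` field holds the raw line including its trailing newline, and `stream` tells you whether it came from `stdout` or `stderr`.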

&lt;p&gt;If you're unsure of what logging driver a container uses, you can confirm with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker inspect &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s1"&gt;'{{.HostConfig.LogConfig.Type}}'&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Viewing container logs with the &lt;code&gt;docker logs&lt;/code&gt; command
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;docker logs&lt;/code&gt; command is the primary way to inspect the logs of a running or stopped container. It is shorthand for the full &lt;code&gt;docker container logs&lt;/code&gt; command, and the two can be used interchangeably.&lt;/p&gt;

&lt;p&gt;Here's the built-in usage reference for quick context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Usage:  docker logs [OPTIONS] CONTAINER

Fetch the logs of a container

Aliases:
  docker container logs, docker logs

Options:
      --details        Show extra details provided to logs
  -f, --follow         Follow log output
      --since string   Show logs since timestamp (e.g. "2013-01-02T13:23:37Z") or relative (e.g. "42m" for 42 minutes)
  -n, --tail string    Number of lines to show from the end of the logs (default "all")
  -t, --timestamps     Show timestamps
      --until string   Show logs before a timestamp (e.g. "2013-01-02T13:23:37Z") or relative (e.g. "42m" for 42 minutes)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view all available logs for a container, simply pass its name or ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs &amp;lt;container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This dumps the &lt;strong&gt;entire log history&lt;/strong&gt; of the specified container to your terminal, which is probably not what you're after.&lt;/p&gt;

&lt;p&gt;For a container that's been running for a while, or one that's particularly noisy, this can mean scrolling through thousands of lines of output.&lt;/p&gt;

&lt;p&gt;To isolate the specific information you need, you can use Docker's built-in filtering flags to narrow the output by time or by the number of lines.&lt;/p&gt;

&lt;p&gt;Let's explore the most useful options next. Note that all options must come &lt;em&gt;before&lt;/em&gt; the container name or ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs &lt;span class="o"&gt;[&lt;/span&gt;&amp;lt;options&amp;gt;] &amp;lt;container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If &lt;code&gt;docker logs&lt;/code&gt; isn't showing anything, you may be hitting one of the common pitfalls. See the troubleshooting section to learn how to fix it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Filtering logs by time (&lt;code&gt;--since&lt;/code&gt; and &lt;code&gt;--until&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;To constrain &lt;code&gt;docker logs&lt;/code&gt; output to a specific time window, you can use a combination of the following options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--since&lt;/code&gt;: Shows logs generated &lt;em&gt;after&lt;/em&gt; a specified point in time.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--until&lt;/code&gt;: Shows logs generated &lt;em&gt;before&lt;/em&gt; a specified point in time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With either flag, you can provide a relative time (like &lt;code&gt;10m&lt;/code&gt; for 10 minutes, &lt;code&gt;3h&lt;/code&gt; for 3 hours) or an absolute timestamp (such as &lt;code&gt;2025-06-13T10:30:00&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Show logs from the last 30 minutes&lt;/span&gt;
docker logs &lt;span class="nt"&gt;--since&lt;/span&gt; 30m &amp;lt;container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Show logs from this morning, before 10 AM&lt;/span&gt;
docker logs &lt;span class="nt"&gt;--until&lt;/span&gt; 2025-06-13T10:00:00 &amp;lt;container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also combine the two:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs &lt;span class="nt"&gt;--since&lt;/span&gt; 2025-06-13T18:00:00 &lt;span class="nt"&gt;--until&lt;/span&gt; 2025-06-13T18:15:00 &amp;lt;container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Following and tailing Docker container logs
&lt;/h3&gt;

&lt;p&gt;While filtering logs by time helps you understand historical events, the most common task while troubleshooting is to see what's happening &lt;em&gt;right now&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This means continuously watching the end of the log stream and displaying new entries in real time as they arrive.&lt;/p&gt;

&lt;p&gt;To follow Docker container logs, use the &lt;code&gt;-f&lt;/code&gt; or &lt;code&gt;--follow&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs &lt;span class="nt"&gt;-f&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, this will print the container's entire log history before it starts streaming new entries, which isn't ideal.&lt;/p&gt;

&lt;p&gt;The most effective pattern is to combine &lt;code&gt;--follow&lt;/code&gt; with &lt;code&gt;--tail&lt;/code&gt; (or its shorthand &lt;code&gt;-n&lt;/code&gt;). This gives you the best of both worlds: a small amount of recent history for context, followed by the live stream:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;--tail&lt;/span&gt; 100 &amp;lt;container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, only the last 100 lines are displayed for context, and then new log messages stream in real time. When you want to stop streaming, press &lt;code&gt;Ctrl+C&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Filtering Docker logs with &lt;code&gt;grep&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;docker logs&lt;/code&gt; command doesn't have a built-in grepping feature, but you can easily pipe its output to standard shell utilities like &lt;code&gt;grep&lt;/code&gt; to quickly search for a specific string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs &amp;lt;container_name_or_id&amp;gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"ERROR"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helps you filter out noise and return only the log lines that match your search term.&lt;/p&gt;
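&lt;p&gt;One caveat: &lt;code&gt;docker logs&lt;/code&gt; replays the container's stderr stream on your shell's stderr, and a pipe only forwards stdout. If your application writes errors to stderr, merge the streams with &lt;code&gt;2&amp;gt;&amp;amp;1&lt;/code&gt; before piping so that &lt;code&gt;grep&lt;/code&gt; sees every line. The sketch below simulates the two streams with a stand-in &lt;code&gt;logs&lt;/code&gt; function rather than a real container:&lt;br&gt;
&lt;/p&gt;

```shell
# Stand-in for a container that writes to both output streams
# (hypothetical; with a real container, use `docker logs <name>` instead)
logs() { echo "INFO started"; echo "ERROR db timeout" >&2; }

# Merge stderr into stdout so the pipe carries both streams:
logs 2>&1 | grep -i error
# → ERROR db timeout
```

&lt;p&gt;The real-world equivalent is &lt;code&gt;docker logs &amp;lt;container_name_or_id&amp;gt; 2&amp;gt;&amp;amp;1 | grep -i error&lt;/code&gt;.&lt;/p&gt;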

&lt;h3&gt;
  
  
  Saving Docker logs to a file
&lt;/h3&gt;

&lt;p&gt;If you need to store a subset of your logs for later analysis, you can redirect the output of &lt;code&gt;docker logs&lt;/code&gt; to a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs &amp;lt;container_name_or_id&amp;gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; app.log       &lt;span class="c"&gt;# overwrite&lt;/span&gt;
docker logs &amp;lt;container_name_or_id&amp;gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; app.log      &lt;span class="c"&gt;# append&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also combine this with filters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs &lt;span class="nt"&gt;--since&lt;/span&gt; 10m &amp;lt;container_name_or_id&amp;gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; recent.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the simplest way to capture Docker logs to a file without changing any logging configuration.&lt;/p&gt;
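&lt;p&gt;Note that &lt;code&gt;&amp;gt;&lt;/code&gt; only redirects stdout, so any lines the container wrote to stderr will still land on your terminal instead of in the file. To capture both streams in one file, merge them first (the container name is a placeholder):&lt;br&gt;
&lt;/p&gt;

```shell
# Capture stdout and stderr in a single file
docker logs <container_name_or_id> > app.log 2>&1
```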

&lt;h2&gt;
  
  
  Managing logs in Docker Compose
&lt;/h2&gt;

&lt;p&gt;Many Docker projects are managed with Docker Compose, and handling logs there is just as straightforward. The main difference is that you use &lt;code&gt;docker compose logs&lt;/code&gt; rather than &lt;code&gt;docker logs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The usage syntax is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs &lt;span class="o"&gt;[&lt;/span&gt;options] &lt;span class="o"&gt;[&lt;/span&gt;service...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where &lt;code&gt;[service...]&lt;/code&gt; is an optional list of service names. Since a single service may be running across multiple containers, Docker Compose automatically aggregates the logs from all containers that belong to that service.&lt;/p&gt;

&lt;p&gt;Let's look at a few common usage patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Viewing logs for a single service
&lt;/h3&gt;

&lt;p&gt;To see logs from just one service defined in your Compose file, specify the service name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs image-provider
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also specify multiple service names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs image-provider shipping otel-collector
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker Compose will color-code the output by service, making it easy to follow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03wal15o52zi6qh89bmy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03wal15o52zi6qh89bmy.png" alt="Docker compose logs output" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For ease of copying and pasting log lines, you'll want to include the &lt;code&gt;--no-log-prefix&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs &lt;span class="nt"&gt;--no-log-prefix&lt;/span&gt; &amp;lt;services&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Viewing logs for all services
&lt;/h3&gt;

&lt;p&gt;To see an interleaved stream of logs from all services in your stack, run the command without a service name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Tailing and filtering
&lt;/h3&gt;

&lt;p&gt;All the flags you learned for &lt;code&gt;docker logs&lt;/code&gt; for tailing and filtering work with &lt;code&gt;docker compose logs&lt;/code&gt; too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs &lt;span class="nt"&gt;--follow&lt;/span&gt; &lt;span class="nt"&gt;--tail&lt;/span&gt; 10 image-provider cart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs &lt;span class="nt"&gt;--since&lt;/span&gt; &lt;span class="s1"&gt;'10m'&lt;/span&gt; db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Inspecting Docker container logs with a GUI
&lt;/h2&gt;

&lt;p&gt;If you would rather inspect container logs visually, a Docker log viewer makes it much easier to browse, filter, and search them without relying on the command line.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Docker Desktop logs
&lt;/h3&gt;

&lt;p&gt;The built-in dashboard in Docker Desktop has a &lt;strong&gt;Logs&lt;/strong&gt; tab for any running container. It provides a simple, real-time view with basic search functionality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzktagqsnmlwoj2bptfii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzktagqsnmlwoj2bptfii.png" alt="Docker Desktop showing OpenTelemetry Collector logs" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Dozzle
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://dozzle.dev/" rel="noopener noreferrer"&gt;Dozzle&lt;/a&gt; is a lightweight, web-based log viewer with a slick interface. It's incredibly easy to run as a Docker container itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; dozzle &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; 8888:8080 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--volume&lt;/span&gt; /var/run/docker.sock:/var/run/docker.sock &lt;span class="se"&gt;\&lt;/span&gt;
    amir20/dozzle:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to &lt;code&gt;http://localhost:8888&lt;/code&gt; in your browser to get a real-time view of all your container logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo31aorthhjd6e6k9bthu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo31aorthhjd6e6k9bthu.png" alt="Viewing Docker logs in Dozzle" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing a Docker logging driver
&lt;/h2&gt;

&lt;p&gt;While &lt;code&gt;json-file&lt;/code&gt; is the default, Docker supports a variety of other &lt;a href="https://docs.docker.com/engine/logging/configure/#supported-logging-drivers" rel="noopener noreferrer"&gt;logging drivers&lt;/a&gt; to suit different needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;none&lt;/code&gt;: Disables logging entirely, which is useful when logs are unnecessary or handled externally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;local&lt;/code&gt;: Recommended for most use cases. It offers better performance and more efficient disk usage than &lt;code&gt;json-file&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;syslog&lt;/code&gt;: Sends logs to the system's &lt;code&gt;syslog&lt;/code&gt; daemon.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;journald&lt;/code&gt;: Writes log output to the &lt;code&gt;journald&lt;/code&gt; logging system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;fluentd&lt;/code&gt;, &lt;code&gt;gelf&lt;/code&gt;, &lt;code&gt;awslogs&lt;/code&gt;, &lt;code&gt;gcplogs&lt;/code&gt;, etc.: Forward logs to external logging services or cloud platforms for centralized aggregation and analysis.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
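&lt;p&gt;To confirm which driver is in effect, you can query both the daemon default and an individual container (the container name is a placeholder):&lt;br&gt;
&lt;/p&gt;

```shell
# Default logging driver configured for the Docker daemon
docker info --format '{{.LoggingDriver}}'

# Logging driver a specific container was started with
docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container_name_or_id>
```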

&lt;h3&gt;
  
  
  Setting a global logging driver
&lt;/h3&gt;

&lt;p&gt;To set a global logging driver for all Docker containers, you must edit the Docker daemon configuration file at &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt;. If the file doesn't exist, create it first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"log-driver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"json-file"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"log-opts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"max-size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"50m"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"max-file"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"4"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"compress"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're using Docker Desktop, you can edit this file by opening the app's Settings and selecting &lt;strong&gt;Docker Engine&lt;/strong&gt; from the sidebar:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkclnlo97wxn9g8nw47d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkclnlo97wxn9g8nw47d.png" alt="Editing Docker daemon configuration in Docker Desktop" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;json-file&lt;/code&gt; driver's most significant drawback is that it does not rotate logs by default. Over time, these log files will grow indefinitely, which can consume all available disk space and crash your server.&lt;/p&gt;

&lt;p&gt;The configuration above addresses this problem by telling Docker to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rotate log files when they reach 50MB (&lt;code&gt;max-size&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Keep a maximum of four old log files (&lt;code&gt;max-file&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Compress the rotated log files to save space (&lt;code&gt;compress&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
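&lt;p&gt;To verify that rotation is actually happening with the &lt;code&gt;json-file&lt;/code&gt; driver, you can look up where a container's log file lives on the host and list the current and rotated files. The container name is a placeholder; the path typically sits under &lt;code&gt;/var/lib/docker/containers&lt;/code&gt; and requires root to read:&lt;br&gt;
&lt;/p&gt;

```shell
# Locate the container's json-file log on the Docker host
log_path=$(docker inspect --format '{{.LogPath}}' <container_name_or_id>)

# List the active log file and any rotated copies next to it
sudo ls -lh "${log_path}"*
```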

&lt;p&gt;An alternative approach is using the &lt;a href="https://docs.docker.com/engine/logging/drivers/local/" rel="noopener noreferrer"&gt;local driver&lt;/a&gt; since it uses a more compact file format and includes &lt;a href="https://www.dash0.com/guides/log-rotation-linux-logrotate" rel="noopener noreferrer"&gt;log rotation&lt;/a&gt; out of the box:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"log-driver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"local"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In production environments, where container logs must be shipped to an external observability platform, you can choose from &lt;a href="https://docs.docker.com/engine/logging/configure/#supported-logging-drivers" rel="noopener noreferrer"&gt;other available logging drivers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For example, the section below demonstrates how to bring Docker logs into an &lt;a href="https://www.dash0.com/guides/opentelemetry-collector" rel="noopener noreferrer"&gt;OpenTelemetry pipeline&lt;/a&gt; through the &lt;code&gt;fluentd&lt;/code&gt; driver.&lt;/p&gt;

&lt;p&gt;Once you've edited your &lt;code&gt;daemon.json&lt;/code&gt; file, you must restart the Docker daemon for the changes to take effect for &lt;strong&gt;newly created containers&lt;/strong&gt;. Existing containers need to be recreated to adopt the updated configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Overriding the logging driver per container
&lt;/h3&gt;

&lt;p&gt;You can override the global logging driver for individual containers when launching them. This is useful for containers that need different retention or delivery behavior:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-driver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;local&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-opt&lt;/span&gt; max-size&lt;span class="o"&gt;=&lt;/span&gt;50m &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-opt&lt;/span&gt; max-file&lt;span class="o"&gt;=&lt;/span&gt;4 &lt;span class="se"&gt;\&lt;/span&gt;
  &amp;lt;image_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're using Docker Compose, you can configure the logging driver in your &lt;code&gt;docker-compose.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;
&lt;span class="na"&gt;&amp;lt;service_name&amp;gt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;image_name&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;local"&lt;/span&gt;
    &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;max-file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4"&lt;/span&gt;
      &lt;span class="na"&gt;max-size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;50m"&lt;/span&gt;
      &lt;span class="na"&gt;compress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To avoid repeating the same configuration across multiple services, define a YAML anchor and reuse it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;
&lt;span class="na"&gt;x-default-logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;logging&lt;/span&gt;
  &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;local"&lt;/span&gt;
  &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;max-size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;50m"&lt;/span&gt;
    &lt;span class="na"&gt;max-file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4"&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;&amp;lt;service_a&amp;gt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*logging&lt;/span&gt;

  &lt;span class="na"&gt;&amp;lt;service_b&amp;gt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*logging&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Understanding Docker's log delivery mode
&lt;/h2&gt;

&lt;p&gt;When your application generates a log, it faces a fundamental choice: should it pause to ensure the log is safely delivered, or should it hand the log off quickly and continue its work?&lt;/p&gt;

&lt;p&gt;This is the core trade-off managed by Docker's log delivery mode, a crucial setting that lets you tune your logging for either maximum reliability or maximum performance.&lt;/p&gt;

&lt;p&gt;Docker supports two modes for delivering logs from your container to the configured logging driver.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Blocking mode
&lt;/h3&gt;

&lt;p&gt;In the default blocking mode, log delivery is synchronous. When your application emits a log, it must wait for the Docker logging driver to process and accept that message before it can continue executing.&lt;/p&gt;

&lt;p&gt;This approach is best for scenarios where every log message is critical and you are using a fast, local logging driver like &lt;code&gt;local&lt;/code&gt; or &lt;code&gt;json-file&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With slower drivers (those that send logs over a network), blocking mode can introduce significant latency and even stall your application if the remote logging service is slow or unreachable.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Non-blocking mode
&lt;/h3&gt;

&lt;p&gt;As an alternative, you can configure a non-blocking delivery mode. In this mode, log delivery is asynchronous. When your application emits a log, the message is immediately placed in an in-memory buffer, and your application continues running without any delay. The logs are then sent to the driver from this buffer in the background.&lt;/p&gt;

&lt;p&gt;The trade-off for this mode is a risk of losing logs. If the in-memory buffer fills up faster than the driver can process logs, new incoming messages will be dropped.&lt;/p&gt;

&lt;p&gt;To mitigate the risk of losing logs in non-blocking mode, you can increase the size of the in-memory buffer from its 1MB default:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"log-driver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"awslogs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"log-opts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"non-blocking"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"max-buffer-size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"50m"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
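&lt;p&gt;You can also enable non-blocking delivery for a single container at run time instead of daemon-wide. The flags below mirror the &lt;code&gt;daemon.json&lt;/code&gt; options (the image name is a placeholder):&lt;br&gt;
&lt;/p&gt;

```shell
docker run -d \
  --log-opt mode=non-blocking \
  --log-opt max-buffer-size=4m \
  <image_name>
```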



&lt;h2&gt;
  
  
  Centralizing Docker logs with OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;While local tools work well in development environments, production systems require a unified logging pipeline that ensures container logs are captured and retained, even after the container is long gone.&lt;/p&gt;

&lt;p&gt;By consolidating your Docker logs in an observability platform like &lt;a href="https://www.dash0.com/" rel="noopener noreferrer"&gt;Dash0&lt;/a&gt;, you'll gain the ability to perform complex searches across your entire infrastructure, build real-time dashboards to visualize trends, and correlate logs with other telemetry signals like metrics or traces.&lt;/p&gt;

&lt;p&gt;One of the most effective ways to ship Docker logs from each host to an observability service is through the OpenTelemetry Collector, which supports a variety of log ingestion methods.&lt;/p&gt;

&lt;p&gt;You may be tempted to use the &lt;a href="https://www.dash0.com/guides/opentelemetry-filelog-receiver" rel="noopener noreferrer"&gt;filelog receiver&lt;/a&gt; to read container log files directly, but this is rarely ideal for Docker environments: it requires giving the Collector access to Docker's internal log directory (typically &lt;code&gt;/var/lib/docker/containers&lt;/code&gt;) and parsing the &lt;code&gt;json-file&lt;/code&gt; format yourself.&lt;/p&gt;

&lt;p&gt;A more effective approach is to set up &lt;a href="https://docs.docker.com/engine/logging/drivers/fluentd/" rel="noopener noreferrer"&gt;fluentd&lt;/a&gt; as the Docker logging driver for your services. This lets Docker stream logs to a Fluentd endpoint without relying on file scraping.&lt;/p&gt;

&lt;p&gt;Here's the configuration you need in your &lt;code&gt;daemon.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"log-driver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"fluentd"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"log-opts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"fluentd-address"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost:8006"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"opentelemetry-demo"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or when setting up the container from the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;container_name&amp;gt; &lt;span class="nt"&gt;--log-driver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;fluentd &lt;span class="nt"&gt;--log-opt&lt;/span&gt; fluentd-address&lt;span class="o"&gt;=&lt;/span&gt;localhost:8006 &amp;lt;image&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or in your Docker Compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;&amp;lt;service&amp;gt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;image&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
      &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;fluentd-address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:8006&lt;/span&gt;
        &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx.myapp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then configure the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/fluentforwardreceiver" rel="noopener noreferrer"&gt;fluentforward receiver&lt;/a&gt; in your OpenTelemetry Collector configuration to listen at the &lt;code&gt;fluentd-address&lt;/code&gt; specified above:&lt;br&gt;
&lt;/p&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;fluentforward&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0:8006&lt;/span&gt;

&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resourcedetection/system&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;detectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;system&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;system&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;hostname_sources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;os&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlphttp/dash0&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your_dash0_endpoint&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Authorization&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Bearer &amp;lt;your_dash0_token&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;Dash0-Dataset&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your_dash0_dataset&amp;gt;&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;fluentforward&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;resourcedetection/system&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlphttp/dash0&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you replace the Dash0 placeholders with &lt;a href="https://www.dash0.com/documentation/dash0/get-started/sending-data-to-dash0" rel="noopener noreferrer"&gt;your actual account values&lt;/a&gt;, you can run the OpenTelemetry Collector as a container on the same host:&lt;br&gt;
&lt;/p&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/otelcol.yaml:/etc/otelcol-contrib/config.yaml &lt;span class="se"&gt;\&lt;/span&gt;
  otel/opentelemetry-collector-contrib:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you'll start seeing your logs in the Dash0 interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4u4lkg4pvewz4rhyjo3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4u4lkg4pvewz4rhyjo3.png" alt="Dash0 interface showing Docker logs" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting common issues with Docker container logs
&lt;/h2&gt;

&lt;p&gt;Docker logging is generally straightforward, but a few recurring issues can still cause confusion. Here's how to recognize them and resolve them quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;code&gt;docker logs&lt;/code&gt; shows no output
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What's happening&lt;/strong&gt;: Your application likely isn't writing to &lt;code&gt;stdout&lt;/code&gt; or &lt;code&gt;stderr&lt;/code&gt;. It might be logging directly to a file inside the container instead. Since Docker's logging drivers only capture standard output streams, it won't pick up logs written to internal files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to fix it&lt;/strong&gt;: Ideally, update your application's logging configuration to write directly to &lt;code&gt;stdout&lt;/code&gt; or &lt;code&gt;stderr&lt;/code&gt;. If modifying the application isn't feasible, you can redirect file-based logs by creating symbolic links to the appropriate output streams in your &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example for an Nginx image&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; /dev/stdout /var/log/nginx/access.log &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; /dev/stderr /var/log/nginx/error.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that even file-based logs are routed through Docker's logging mechanism.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Logging driver does not support reading
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error response from daemon: configured logging driver does not support reading
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What's happening&lt;/strong&gt;: Remote logging drivers such as &lt;code&gt;awslogs&lt;/code&gt;, &lt;code&gt;splunk&lt;/code&gt;, or &lt;code&gt;gelf&lt;/code&gt; forward logs directly to an external system without storing anything locally. Normally, Docker caches the logs using its &lt;a href="https://docs.docker.com/engine/logging/dual-logging/" rel="noopener noreferrer"&gt;dual logging&lt;/a&gt; functionality. However, if this feature is disabled for the container, the &lt;code&gt;docker logs&lt;/code&gt; command can't retrieve any output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to fix it&lt;/strong&gt;: Ensure &lt;code&gt;cache-disabled&lt;/code&gt; is set to &lt;code&gt;false&lt;/code&gt; in the logging options. This tells Docker to send logs to the remote driver &lt;em&gt;and&lt;/em&gt; keep a local copy for &lt;code&gt;docker logs&lt;/code&gt; to read. Note that changes to &lt;code&gt;daemon.json&lt;/code&gt; take effect only after restarting the Docker daemon, and only for containers created afterwards.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"log-driver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"awslogs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"log-opts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"cache-disabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"false"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Some best practices for Docker container logging
&lt;/h2&gt;

&lt;p&gt;Effective logging in a containerized environment is about more than running &lt;code&gt;docker logs&lt;/code&gt;. Following the guidelines below will help you build a logging setup that holds up in real-world Docker deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Write application logs to stdout and stderr
&lt;/h3&gt;

&lt;p&gt;Docker only captures what your application writes to &lt;code&gt;stdout&lt;/code&gt; and &lt;code&gt;stderr&lt;/code&gt;. Avoid writing logs directly to files inside the container unless you redirect them to these streams.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Use non-blocking mode for network-based logging drivers
&lt;/h3&gt;

&lt;p&gt;Drivers that send logs over the network (such as &lt;code&gt;fluentd&lt;/code&gt;, &lt;code&gt;gelf&lt;/code&gt;, or &lt;code&gt;awslogs&lt;/code&gt;) run in blocking mode by default, which can stall your application when the remote endpoint is slow or unreachable. It's usually better to enable non-blocking mode and tune &lt;code&gt;max-buffer-size&lt;/code&gt; to reduce the risk of losing logs during spikes.&lt;/p&gt;
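&lt;p&gt;For example, non-blocking delivery can be enabled per container through &lt;code&gt;--log-opt&lt;/code&gt; flags; &lt;code&gt;mode&lt;/code&gt; and &lt;code&gt;max-buffer-size&lt;/code&gt; are standard Docker logging options, while the image name and fluentd address below are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-driver&lt;/span&gt; fluentd &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-opt&lt;/span&gt; fluentd-address=localhost:24224 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-opt&lt;/span&gt; mode=non-blocking &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-opt&lt;/span&gt; max-buffer-size=4m &lt;span class="se"&gt;\&lt;/span&gt;
  my-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;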

&lt;h3&gt;
  
  
  3. Include container metadata in logs
&lt;/h3&gt;

&lt;p&gt;Metadata such as the container name, ID, service name, and image version is crucial for troubleshooting containerized workloads in production. In an OpenTelemetry pipeline, such metadata belongs in the &lt;a href="https://www.dash0.com/knowledge/what-are-opentelemetry-resources" rel="noopener noreferrer"&gt;resource attributes&lt;/a&gt;, so that it travels with every log record.&lt;/p&gt;
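&lt;p&gt;As a sketch, a Collector pipeline can attach such metadata with the &lt;code&gt;resourcedetection&lt;/code&gt; processor's &lt;code&gt;docker&lt;/code&gt; detector plus a &lt;code&gt;resource&lt;/code&gt; processor that stamps on identifiers of your choosing (the service name and version below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;processors:
  resourcedetection/docker:
    detectors: [env, docker]
  resource:
    attributes:
      - key: service.name
        value: my-app
        action: upsert
      - key: service.version
        value: "1.4.2"
        action: upsert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;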

&lt;h3&gt;
  
  
  4. Use the &lt;code&gt;local&lt;/code&gt; or &lt;code&gt;json-file&lt;/code&gt; driver with log rotation
&lt;/h3&gt;

&lt;p&gt;If you rely on host-level log storage (via &lt;code&gt;local&lt;/code&gt; or &lt;code&gt;json-file&lt;/code&gt;), always enable rotation through the &lt;code&gt;max-file&lt;/code&gt; and &lt;code&gt;max-size&lt;/code&gt; options. Unrotated log files are a common cause of disk pressure on production nodes.&lt;/p&gt;
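&lt;p&gt;For example, rotation can be enabled for all new containers in &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; (the size and file-count values below are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;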

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;You've now journeyed from the basic &lt;code&gt;docker logs&lt;/code&gt; command to understanding the critical importance of logging drivers, log rotation, and centralized logging strategies.&lt;/p&gt;

&lt;p&gt;By mastering these tools and concepts, you're no longer just guessing when things go wrong. You have the visibility you need to build, debug, and run resilient, production-ready applications.&lt;/p&gt;

&lt;p&gt;Whenever possible, &lt;a href="https://www.dash0.com/guides/structured-logging-for-modern-applications" rel="noopener noreferrer"&gt;structure your application's logs as JSON&lt;/a&gt;. A plain text line is hard to parse, but a JSON object with fields like &lt;code&gt;level&lt;/code&gt;, &lt;code&gt;timestamp&lt;/code&gt;, and &lt;code&gt;message&lt;/code&gt; is instantly machine-readable, making your logs far more useful in any observability platform.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>From Zero to Pipeline: An OpenTelemetry Collector Guide</title>
      <dc:creator>Ayooluwa Isaiah</dc:creator>
      <pubDate>Tue, 11 Nov 2025 15:23:38 +0000</pubDate>
      <link>https://forem.com/dash0/from-zero-to-pipeline-an-opentelemetry-collector-guide-58l0</link>
      <guid>https://forem.com/dash0/from-zero-to-pipeline-an-opentelemetry-collector-guide-58l0</guid>
      <description>&lt;p&gt;Your services likely emit a constant steam of telemetry data, but how does it actually travel from your applications and infrastructure to the observability backend where it's stored and analyzed?&lt;/p&gt;

&lt;p&gt;For many, the answer is a chaotic web of vendor-specific agents, direct-to-backend SDK configurations, and disparate data shippers. This setup is brittle, expensive, hard to manage, and locks you into a single vendor's ecosystem.&lt;/p&gt;

&lt;p&gt;There is a better way.&lt;/p&gt;

&lt;p&gt;Instead of managing a maze of point-to-point integrations, we're going to build a &lt;strong&gt;telemetry pipeline&lt;/strong&gt;: a centralized, vendor-neutral system that gives you complete control to collect, enrich, and route your observability data.&lt;/p&gt;

&lt;p&gt;At the heart of this system is the &lt;a href="https://opentelemetry.io/docs/collector/" rel="noopener noreferrer"&gt;OpenTelemetry Collector&lt;/a&gt;. It is a standalone service that acts as a universal receiver, a powerful processing engine, and a flexible dispatcher for telemetry data.&lt;/p&gt;

&lt;p&gt;In this article, we'll build a telemetry pipeline from the ground up. You'll move from configuring basic data ingestion and exporting to discovering several processing techniques and designing complex data flows that help turn raw telemetry into actionable insights.&lt;/p&gt;

&lt;p&gt;Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  The simplest possible pipeline
&lt;/h2&gt;

&lt;p&gt;Every data pipeline needs an entry point and an exit. We'll start by building the most basic version of an OpenTelemetry pipeline imaginable. The goal is to receive telemetry data and print it directly to the console, confirming that data is flowing correctly before we add complexity.&lt;/p&gt;

&lt;p&gt;The Collector's behavior is defined by a YAML configuration file. For this initial setup, you need to understand three top-level sections: &lt;code&gt;receivers&lt;/code&gt;, &lt;code&gt;exporters&lt;/code&gt;, and &lt;code&gt;service&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Receivers
&lt;/h3&gt;

&lt;p&gt;Receivers are the entry points for all telemetry data coming into the Collector from your applications and infrastructure.&lt;/p&gt;

&lt;p&gt;They're configured to ingest data in various ways such as listening for network traffic, actively polling endpoints, reading from local sources (&lt;a href="https://www.dash0.com/guides/opentelemetry-filelog-receiver" rel="noopener noreferrer"&gt;like files&lt;/a&gt;), or querying infrastructure APIs.&lt;/p&gt;

&lt;p&gt;For example, the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md" rel="noopener noreferrer"&gt;OTLP receiver&lt;/a&gt; sets up an endpoint that accepts data sent using the &lt;a href="https://www.dash0.com/knowledge/opentelemetry-protocol-otlp" rel="noopener noreferrer"&gt;OpenTelemetry Protocol&lt;/a&gt;, while the &lt;a href="https://www.dash0.com/guides/opentelemetry-prometheus-receiver" rel="noopener noreferrer"&gt;Prometheus receiver&lt;/a&gt; periodically scrapes metrics from specified targets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exporters
&lt;/h3&gt;

&lt;p&gt;Exporters are the final destinations for all telemetry data leaving the Collector after it has been processed.&lt;/p&gt;

&lt;p&gt;They're responsible for translating data into the required format and transmitting it to various backend systems, such as observability platforms, databases, or message queues.&lt;/p&gt;

&lt;p&gt;For example, the &lt;a href="https://www.dash0.com/guides/opentelemetry-otlp-http-exporter" rel="noopener noreferrer"&gt;otlphttp exporter&lt;/a&gt; can send data to any OTLP-compatible backend over HTTP, while the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/debugexporter" rel="noopener noreferrer"&gt;debug exporter&lt;/a&gt; simply writes telemetry data to the console for debugging.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;service&lt;/code&gt; section is the central orchestrator that activates and defines the flow of data through the Collector. No component is active unless it is enabled here.&lt;/p&gt;

&lt;p&gt;It works by defining &lt;code&gt;pipelines&lt;/code&gt; for each signal type (&lt;a href="https://www.dash0.com/knowledge/logs-metrics-and-traces-observability" rel="noopener noreferrer"&gt;traces, metrics, or logs&lt;/a&gt;). Each pipeline specifies the exact path data will take by linking receivers, processors, and exporters.&lt;/p&gt;

&lt;p&gt;For example, a &lt;code&gt;traces&lt;/code&gt; pipeline could be configured to receive span data over OTLP and forward it to Jaeger through the &lt;a href="https://www.dash0.com/guides/otlp-grpc-exporter" rel="noopener noreferrer"&gt;OTLP exporter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To see the three components in action, let's create our first configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0:4317&lt;/span&gt;

&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;verbosity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;detailed&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration creates a simple pipeline for logs alone. It sets up an &lt;code&gt;otlp&lt;/code&gt; receiver to accept log data sent over gRPC on port 4317. Any logs received are passed, without any processing, to the &lt;code&gt;debug&lt;/code&gt; exporter, which then prints the full, detailed content to the Collector's &lt;code&gt;stderr&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To test your pipeline, you need an application that can generate and send telemetry data. A convenient tool for this is &lt;a href="https://github.com/krzko/otelgen" rel="noopener noreferrer"&gt;otelgen&lt;/a&gt;, which produces synthetic logs, traces, and metrics.&lt;/p&gt;

&lt;p&gt;You can define and run the Collector and the &lt;code&gt;otelgen&lt;/code&gt; tool using the following Docker Compose configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otelcol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel/opentelemetry-collector-contrib:0.129.1&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otelcol&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./otelcol.yaml:/etc/otelcol-contrib/config.yaml&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;

  &lt;span class="na"&gt;otelgen&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/krzko/otelgen:v0.5.2&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otelgen&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--otel-exporter-otlp-endpoint"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;otelcol:4317"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--insecure"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;logs"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;multi"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
      &lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;otelcol&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otelnet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bridge&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;otelgen&lt;/code&gt; service is configured via its command arguments to send telemetry that matches our Collector's setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--otel-exporter-otlp-endpoint otelcol:4317&lt;/code&gt;: Tells &lt;code&gt;otelgen&lt;/code&gt; to send data to the &lt;code&gt;otelcol&lt;/code&gt; service on port 4317.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--insecure&lt;/code&gt;: Disables TLS.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;logs&lt;/code&gt;: Instructs &lt;code&gt;otelgen&lt;/code&gt; to generate log data specifically.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;multi&lt;/code&gt;: A subcommand that generates a continuous, varied stream of logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To see it in action, start both services in detached mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once running, the &lt;code&gt;otelcol&lt;/code&gt; service listens for OTLP data over gRPC on port 4317, and the &lt;code&gt;otelgen&lt;/code&gt; service generates and sends a continuous stream of logs to it.&lt;/p&gt;

&lt;p&gt;You can monitor the Collector's output to verify that it's receiving the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs otelcol &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;--no-log-prefix&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see a continuous stream of log data being printed to the console. A single log entry will be formatted like this, showing rich contextual information like the severity, body, and various attributes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:
     -&amp;gt; host.name: Str(node-1)
     -&amp;gt; k8s.container.name: Str(otelgen)
     -&amp;gt; k8s.namespace.name: Str(default)
     -&amp;gt; k8s.pod.name: Str(otelgen-pod-ab06ca8b)
     -&amp;gt; service.name: Str(otelgen)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope otelgen
LogRecord #0
ObservedTimestamp: 2025-07-06 11:21:57.085421018 +0000 UTC
Timestamp: 2025-07-06 11:21:57.085420886 +0000 UTC
SeverityText: Error
SeverityNumber: Error(17)
Body: Str(Log 3: Error phase: finish)
Attributes:
     -&amp;gt; worker_id: Str(3)
     -&amp;gt; service.name: Str(otelgen)
     -&amp;gt; trace_id: Str(46287c1c7b7eebea22af2b48b97f4a49)
     -&amp;gt; span_id: Str(f5777521efe11f94)
     -&amp;gt; trace_flags: Str(01)
     -&amp;gt; phase: Str(finish)
     -&amp;gt; http.method: Str(PUT)
     -&amp;gt; http.status_code: Int(403)
     -&amp;gt; http.target: Str(/api/v1/resource/3)
     -&amp;gt; k8s.pod.name: Str(otelgen-pod-8f215fc5)
     -&amp;gt; k8s.namespace.name: Str(default)
     -&amp;gt; k8s.container.name: Str(otelgen)
Trace ID:
Span ID:
Flags: 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understanding the &lt;code&gt;debug&lt;/code&gt; exporter output
&lt;/h3&gt;

&lt;p&gt;The output from the &lt;code&gt;debug&lt;/code&gt; exporter shows the structured format of OpenTelemetry data (OTLP). It's hierarchical, starting from the resource that generated the telemetry all the way down to the individual telemetry record. Let's break down what you're seeing.&lt;/p&gt;

&lt;h4&gt;
  
  
  ResourceLogs and Resource attributes
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc7zhr7xrzgm2sayxkgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc7zhr7xrzgm2sayxkgf.png" alt="ResourceLog and Resource attributes" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ResourceLog #0&lt;/code&gt;: This is the top-level container. The &lt;code&gt;#0&lt;/code&gt; indicates it's the first resource in this batch, which means all telemetry within this block comes from the same resource.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Resource attributes&lt;/code&gt;: These are key-value pairs that describe the entity that produced the log. This could be a service, a container, or a host machine. In the example, attributes like &lt;code&gt;service.name&lt;/code&gt; and &lt;code&gt;k8s.pod.name&lt;/code&gt; apply to every log generated by this resource.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ScopeLogs
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypt7gv7usnyctgjc2ebj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypt7gv7usnyctgjc2ebj.png" alt="ScopeLogs" width="703" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ScopeLogs #0&lt;/code&gt;: Within a resource, telemetry is grouped by its origin, known as the instrumentation scope. This block contains a batch of logs from the same scope.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;InstrumentationScope&lt;/code&gt;: This identifies the specific library or module that generated the log (in this case, &lt;code&gt;otelgen&lt;/code&gt;). This is useful for knowing which part of your application emitted the log.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  LogRecord
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lsezf8sh3zmo17kcjbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lsezf8sh3zmo17kcjbs.png" alt="LogRecord" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Within a single &lt;code&gt;ResourceLog&lt;/code&gt; block, you may see multiple &lt;code&gt;LogRecord&lt;/code&gt; entries (&lt;code&gt;#0&lt;/code&gt;, &lt;code&gt;#1&lt;/code&gt;, &lt;code&gt;#2&lt;/code&gt;, and so on), all belonging to the same resource.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;LogRecord #0&lt;/code&gt;: This is the first log entry belonging to the resource. The key fields are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Timestamp&lt;/code&gt;: When the event occurred.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SeverityNumber&lt;/code&gt; / &lt;code&gt;SeverityText&lt;/code&gt;: &lt;a href="https://www.dash0.com/knowledge/log-levels" rel="noopener noreferrer"&gt;The log level&lt;/a&gt;, such as &lt;code&gt;ERROR&lt;/code&gt; or &lt;code&gt;INFO&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Body&lt;/code&gt;: The actual log message content.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Attributes&lt;/code&gt;: Key-value pairs that provide context specific to this single log event.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Trace ID&lt;/code&gt; / &lt;code&gt;Span ID&lt;/code&gt;: When populated, they directly link a log to a specific trace and span, allowing you to easily correlate logs and traces in your observability backend.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Congratulations, you've built and verified your first telemetry pipeline! It's simple, but it establishes the fundamental flow of data from a source, through the Collector, and to an exit point. Now, let's make it more powerful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Processing and transforming telemetry
&lt;/h2&gt;

&lt;p&gt;Right now, your pipeline is just a pass-through conduit. Data goes in one end and comes out the other untouched. The real power of the Collector lies in its ability to process data in-flight. This is where &lt;code&gt;processors&lt;/code&gt; come in.&lt;/p&gt;

&lt;p&gt;Processors are intermediary components in a pipeline that can inspect, modify, filter, or enrich your telemetry. Let's add a few essential processors to solve common problems and make the pipeline more intelligent.&lt;/p&gt;

&lt;p&gt;Our new pipeline flow will look like this: &lt;code&gt;Receiver -&amp;gt; [Processors] -&amp;gt; Exporter&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Batching telemetry for efficiency
&lt;/h3&gt;

&lt;p&gt;Sending every single span or metric individually over the network is incredibly inefficient. It creates high network traffic and puts unnecessary load on the backend. &lt;a href="https://www.dash0.com/guides/opentelemetry-batch-processor" rel="noopener noreferrer"&gt;The batch processor&lt;/a&gt; solves this by grouping telemetry into batches before exporting.&lt;/p&gt;

&lt;p&gt;Go ahead and add it to your &lt;code&gt;processors&lt;/code&gt; section. By default, it buffers data for a short period to create batches automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# Add this top-level 'processors' section&lt;/span&gt;
&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# You can customize the default values for more control&lt;/span&gt;
    &lt;span class="c1"&gt;# send_batch_size: 8192&lt;/span&gt;
    &lt;span class="c1"&gt;# timeout: 200ms&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="c1"&gt;# Add the processor to your pipeline's execution path.&lt;/span&gt;
      &lt;span class="c1"&gt;# Order matters here if you have multiple processors.&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this simple addition, your pipeline now buffers data for up to 200 milliseconds or until it has 8192 items (whichever comes first) before it forwards data to the configured exporters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reducing noise by filtering telemetry
&lt;/h3&gt;

&lt;p&gt;Telemetry data can be noisy. For example, frequent &lt;code&gt;DEBUG&lt;/code&gt;-level logs are useful in development but largely superfluous in production, except when debugging an active issue. Let's add a bouncer to our pipeline to drop this noise at the source.&lt;/p&gt;

&lt;p&gt;We'll use the &lt;a href="https://www.dash0.com/guides/opentelemetry-filter-processor" rel="noopener noreferrer"&gt;filter processor&lt;/a&gt;, which lets you drop telemetry data using the powerful &lt;a href="https://www.dash0.com/guides/opentelemetry-transformation-language-ottl" rel="noopener noreferrer"&gt;OpenTelemetry Transformation Language (OTTL)&lt;/a&gt;. Say you want to drop all logs below the &lt;code&gt;INFO&lt;/code&gt; severity level; you can do so with the following modifications:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# The filter processor lets you exclude telemetry data based on its attributes&lt;/span&gt;
  &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;log_record&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;severity_number &amp;lt; SEVERITY_NUMBER_INFO&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="c1"&gt;# The order is important. You want to drop data before batching it.&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;filter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, any log with a severity number less than 9 (&lt;code&gt;INFO&lt;/code&gt;) will be dropped by the Collector and will never reach the &lt;code&gt;debug&lt;/code&gt; exporter.&lt;/p&gt;
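&lt;p&gt;Severity is not the only criterion you can filter on: OTTL conditions can inspect any part of the log record, and a record is dropped if &lt;em&gt;any&lt;/em&gt; condition matches. As an illustrative sketch (the health-check path pattern here is hypothetical), you could also drop noisy access logs by matching on the body:&lt;/p&gt;

```yaml
# otelcol.yaml (sketch; the body pattern is illustrative)
processors:
  filter:
    logs:
      log_record:
        # Drop anything below INFO, as before
        - severity_number < SEVERITY_NUMBER_INFO
        # Also drop health-check chatter; conditions are OR'ed,
        # so a record matching either condition is dropped
        - IsMatch(body, ".*/healthz.*")
```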

&lt;h3&gt;
  
  
  Modifying and enriching telemetry data
&lt;/h3&gt;

&lt;p&gt;When you'd like to add, remove, or modify attributes in your telemetry data, there are a few general-purpose processors you can use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.dash0.com/guides/opentelemetry-resource-processor" rel="noopener noreferrer"&gt;resource processor&lt;/a&gt;: For actions targeting &lt;a href="https://www.dash0.com/knowledge/what-are-opentelemetry-resources" rel="noopener noreferrer"&gt;resource-level attributes&lt;/a&gt; (e.g., &lt;code&gt;host.name&lt;/code&gt;, &lt;code&gt;service.name&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.dash0.com/guides/opentelemetry-attributes-processor" rel="noopener noreferrer"&gt;attributes processor&lt;/a&gt;: For manipulating attributes of individual logs, spans, or metric datapoints.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/transformprocessor" rel="noopener noreferrer"&gt;transform processor&lt;/a&gt;: The most powerful of the three, for performing complex transformations on any part of your telemetry data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some common use cases for these processors include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.dash0.com/guides/scrubbing-sensitive-data-with-opentelemetry" rel="noopener noreferrer"&gt;Redacting or removing sensitive information&lt;/a&gt; from telemetry before it leaves your systems.&lt;/li&gt;
&lt;li&gt;Enriching data by adding static attributes.&lt;/li&gt;
&lt;li&gt;Renaming or standardizing attributes to conform to semantic conventions across different services.&lt;/li&gt;
&lt;li&gt;Correcting malformed or misplaced data sent by older or misconfigured instrumentation.&lt;/li&gt;
&lt;/ul&gt;
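&lt;p&gt;As a minimal sketch of the first two use cases with the attributes processor (the attribute keys below are illustrative, not from the example data):&lt;/p&gt;

```yaml
# otelcol.yaml (sketch; attribute keys are illustrative)
processors:
  attributes/scrub:
    actions:
      # Remove a sensitive attribute entirely
      - key: user.email
        action: delete
      # Replace a value with its SHA-1 hash instead of dropping it
      - key: client.address
        action: hash
      # Enrich every record with a static attribute
      - key: deployment.environment
        value: production
        action: insert
```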

&lt;p&gt;Let's examine the structure of the OTLP log records being sent by the &lt;code&gt;otelgen&lt;/code&gt; tool once again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:
     -&amp;gt; host.name: Str(node-1)
     -&amp;gt; k8s.container.name: Str(otelgen)
     -&amp;gt; k8s.namespace.name: Str(default)
     -&amp;gt; k8s.pod.name: Str(otelgen-pod-b9919c90)
     -&amp;gt; service.name: Str(otelgen)
LogRecord #0
ObservedTimestamp: 2025-07-03 16:01:40.264711241 +0000 UTC
Timestamp: 2025-07-03 16:01:40.264711041 +0000 UTC
SeverityText: Fatal
SeverityNumber: Fatal(21)
Body: Str(Log 1763: Fatal phase: finish)
Attributes:
     -&amp;gt; worker_id: Str(1763)
     -&amp;gt; service.name: Str(otelgen)
     -&amp;gt; trace_id: Str(a85d432127e63d667508563efd73af52)
     -&amp;gt; span_id: Str(34c07d59e6cfa2d9)
     -&amp;gt; trace_flags: Str(01)
     -&amp;gt; phase: Str(finish)
     -&amp;gt; http.method: Str(POST)
     -&amp;gt; http.status_code: Int(200)
     -&amp;gt; http.target: Str(/api/v1/resource/1763)
     -&amp;gt; k8s.pod.name: Str(otelgen-pod-b9919c90)
     -&amp;gt; k8s.namespace.name: Str(default)
     -&amp;gt; k8s.container.name: Str(otelgen)
Trace ID:
Span ID:
Flags: 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are (at least) three issues here that deviate from the correct OpenTelemetry data model and &lt;a href="https://www.dash0.com/knowledge/otel-semantic-conventions-explainer" rel="noopener noreferrer"&gt;semantic conventions&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Misplaced trace context&lt;/strong&gt;: The &lt;code&gt;trace_id&lt;/code&gt;, &lt;code&gt;span_id&lt;/code&gt;, and &lt;code&gt;trace_flags&lt;/code&gt; values are incorrectly placed inside the &lt;code&gt;Attributes&lt;/code&gt; map, while the dedicated top-level &lt;code&gt;Trace ID&lt;/code&gt;, &lt;code&gt;Span ID&lt;/code&gt;, and &lt;code&gt;Flags&lt;/code&gt; fields are empty.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Redundant attributes&lt;/strong&gt;: Resource attributes like &lt;code&gt;k8s.pod.name&lt;/code&gt; and &lt;code&gt;service.name&lt;/code&gt; are duplicated in the log record's &lt;code&gt;Attributes&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deprecated attributes&lt;/strong&gt;: HTTP attributes like &lt;code&gt;http.method&lt;/code&gt;, &lt;code&gt;http.target&lt;/code&gt;, and &lt;code&gt;http.status_code&lt;/code&gt; have all been deprecated in favor of newer attributes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;code&gt;transform&lt;/code&gt; processor is the perfect tool for fixing these issues. Add the following modifications to your &lt;code&gt;otelcol.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;transform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;log_statements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Move trace context from attributes to the correct top-level fields&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;log&lt;/span&gt;
        &lt;span class="na"&gt;statements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;set(trace_id.string, attributes["trace_id"])&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;set(span_id.string, attributes["span_id"])&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;set(flags, Int(attributes["trace_flags"]))&lt;/span&gt;
      &lt;span class="c1"&gt;# Delete the original, now redundant, trace context attributes&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;log&lt;/span&gt;
        &lt;span class="na"&gt;statements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete_key(attributes, "trace_id")&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete_key(attributes, "span_id")&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete_key(attributes, "trace_flags")&lt;/span&gt;
      &lt;span class="c1"&gt;# Delete the duplicated resource attributes from the log record's attributes&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;log&lt;/span&gt;
        &lt;span class="na"&gt;statements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete_key(attributes, "k8s.pod.name")&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete_key(attributes, "k8s.namespace.name")&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete_key(attributes, "k8s.container.name")&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete_key(attributes, "service.name")&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;filter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;transform&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Add the transform processor to the pipeline&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration uses OTTL statements to clean up the log records:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;set(trace_id.string, ...)&lt;/code&gt;: This statement takes the value of the &lt;code&gt;trace_id&lt;/code&gt; key in the attributes map and sets it as the top-level &lt;code&gt;Trace ID&lt;/code&gt; of the log record. The same logic applies to the &lt;code&gt;span_id.string&lt;/code&gt; and &lt;code&gt;flags&lt;/code&gt; statements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;delete_key(attributes, ...)&lt;/code&gt;: After moving the values, this function removes the original keys from the attributes map to eliminate redundancy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
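&lt;p&gt;The deprecated HTTP attributes flagged earlier can be migrated the same way. Under the current HTTP semantic conventions, &lt;code&gt;http.method&lt;/code&gt; became &lt;code&gt;http.request.method&lt;/code&gt;, &lt;code&gt;http.status_code&lt;/code&gt; became &lt;code&gt;http.response.status_code&lt;/code&gt;, and &lt;code&gt;http.target&lt;/code&gt; was split into &lt;code&gt;url.path&lt;/code&gt; and &lt;code&gt;url.query&lt;/code&gt;. A sketch of additional statements (treating &lt;code&gt;http.target&lt;/code&gt; as a plain path here, which holds for this sample data but not for URLs with query strings):&lt;/p&gt;

```yaml
# otelcol.yaml (sketch: additional entries for transform.log_statements)
- context: log
  statements:
    # Copy each deprecated attribute to its current-convention key...
    - set(attributes["http.request.method"], attributes["http.method"])
    - set(attributes["http.response.status_code"], attributes["http.status_code"])
    - set(attributes["url.path"], attributes["http.target"])
    # ...then remove the deprecated originals
    - delete_key(attributes, "http.method")
    - delete_key(attributes, "http.status_code")
    - delete_key(attributes, "http.target")
```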

&lt;p&gt;You can recreate the containers to see it in action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--force-recreate&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you check the logs, you'll notice that the outgoing log data is now correctly formatted, smaller, and aligned with semantic conventions, with the &lt;code&gt;Trace ID&lt;/code&gt; and &lt;code&gt;Span ID&lt;/code&gt; fields properly populated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-07-03T16:36:49.418Z     info    ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:
     -&amp;gt; host.name: Str(node-1)
     -&amp;gt; k8s.container.name: Str(otelgen)
     -&amp;gt; k8s.namespace.name: Str(default)
     -&amp;gt; k8s.pod.name: Str(otelgen-pod-3efafa6f)
     -&amp;gt; service.name: Str(otelgen)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope otelgen
LogRecord #0
ObservedTimestamp: 2025-07-03 16:36:48.41161663 +0000 UTC
Timestamp: 2025-07-03 16:36:48.411616563 +0000 UTC
SeverityText: Error
SeverityNumber: Error(17)
Body: Str(Log 38: Error phase: finish)
Attributes:
     -&amp;gt; worker_id: Str(38)
     -&amp;gt; phase: Str(finish)
     -&amp;gt; url.path: Str(/api/v1/resource/340)
     -&amp;gt; http.response.status_code: Int(200)
     -&amp;gt; http.request.method: Str(GET)
Trace ID: 86713e2736d6f6a398047b9317b11398
Span ID: d06e86785766aa64
Flags: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Ensuring resilience with the Memory Limiter
&lt;/h3&gt;

&lt;p&gt;An overloaded service could suddenly send a massive flood of data, overwhelming the Collector and causing it to run out of memory and crash. This would create a total visibility outage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsz8dlybr209vas0bret5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsz8dlybr209vas0bret5.png" alt="How the OpenTelemetry Collector Memory Limiter works" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.dash0.com/guides/opentelemetry-memory-limiter-processor" rel="noopener noreferrer"&gt;memory_limiter processor&lt;/a&gt; acts as a safety valve to prevent this. It monitors memory usage and starts rejecting data if it exceeds a configured limit, enforcing backpressure on the data source.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# ...&lt;/span&gt;
  &lt;span class="na"&gt;transform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# ...&lt;/span&gt;
  &lt;span class="na"&gt;memory_limiter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# How often to check the collector's memory usage.&lt;/span&gt;
    &lt;span class="na"&gt;check_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1s&lt;/span&gt;
    &lt;span class="c1"&gt;# The hard memory limit in Mebibytes (MiB). If usage exceeds this,&lt;/span&gt;
    &lt;span class="c1"&gt;# the collector will start rejecting new data.&lt;/span&gt;
    &lt;span class="na"&gt;limit_mib&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400&lt;/span&gt;
    &lt;span class="c1"&gt;# A soft limit. When usage drops below this, the collector will&lt;/span&gt;
    &lt;span class="c1"&gt;# start accepting data again.&lt;/span&gt;
    &lt;span class="na"&gt;spike_limit_mib&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;memory_limiter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;filter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;transform&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the &lt;code&gt;memory_limiter&lt;/code&gt; should come &lt;strong&gt;first&lt;/strong&gt; in your pipeline's processor list. If memory is over the limit, you want to reject data immediately, before wasting CPU cycles on further processing.&lt;/p&gt;
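&lt;p&gt;If you'd rather not hard-code mebibyte values, the processor also accepts limits as percentages of total available memory, which is convenient in containers whose memory allocation varies by environment. A sketch:&lt;/p&gt;

```yaml
# otelcol.yaml (alternative sketch using percentage-based limits)
processors:
  memory_limiter:
    check_interval: 1s
    # Hard limit as a percentage of total available memory
    limit_percentage: 80
    # Spike headroom; the soft limit becomes 80% - 25% = 55%
    spike_limit_percentage: 25
```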

&lt;h2&gt;
  
  
  Handling multiple signals with parallel pipelines
&lt;/h2&gt;

&lt;p&gt;So far, you've built a simple pipeline for processing logs. However, a key strength of the OpenTelemetry Collector is its ability to manage all signals simultaneously within a single instance. You can achieve this by defining parallel pipelines, one for each signal type, in the &lt;code&gt;service&lt;/code&gt; section.&lt;/p&gt;

&lt;p&gt;Let's expand the configuration to also process traces. The goal is to receive traces from an application, batch them for efficiency, and then send them to a &lt;a href="https://www.dash0.com/knowledge/what-is-jaeger-tracing" rel="noopener noreferrer"&gt;Jaeger&lt;/a&gt; instance for visualization, while the existing &lt;code&gt;logs&lt;/code&gt; pipeline continues to operate independently (writing to the console as before).&lt;/p&gt;

&lt;p&gt;To send traces to Jaeger, use the OTLP exporter. You can give any component an identifying name with the &lt;code&gt;type/name&lt;/code&gt; syntax as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp/jaeger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;jaeger:4317&lt;/span&gt; &lt;span class="c1"&gt;# The address of the Jaeger gRPC endpoint&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;insecure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;# Use TLS in production&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now add a new pipeline to the service section specifically for traces. This pipeline will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reuse the same &lt;code&gt;otlp&lt;/code&gt; receiver we already defined.&lt;/li&gt;
&lt;li&gt;Reuse the &lt;code&gt;batch&lt;/code&gt; and &lt;code&gt;memory_limiter&lt;/code&gt; processors.&lt;/li&gt;
&lt;li&gt;Send its data to the new &lt;code&gt;otlp/jaeger&lt;/code&gt; exporter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is the complete &lt;code&gt;service&lt;/code&gt; section showing both the &lt;code&gt;logs&lt;/code&gt; and &lt;code&gt;traces&lt;/code&gt; pipelines running in parallel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;memory_limiter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;filter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;transform&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

    &lt;span class="na"&gt;traces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;memory_limiter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp/jaeger&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To test this, you need to update your &lt;code&gt;docker-compose.yml&lt;/code&gt; to run a Jaeger instance and a second &lt;code&gt;otelgen&lt;/code&gt; service configured to generate traces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otelcol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel/opentelemetry-collector-contrib:0.129.1&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otelcol&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./otelcol.yaml:/etc/otelcol-contrib/config.yaml&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;

  &lt;span class="na"&gt;jaeger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;jaegertracing/all-in-one:1.71.0&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;jaeger&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;16686:16686&lt;/span&gt;

  &lt;span class="na"&gt;otelgen-logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/krzko/otelgen:v0.5.2&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otelgen-logs&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--otel-exporter-otlp-endpoint"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;otelcol:4317"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--insecure"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;logs"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;multi"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
      &lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;otelcol&lt;/span&gt;

  &lt;span class="na"&gt;otelgen-traces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/krzko/otelgen:v0.5.2&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otelgen-traces&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--otel-exporter-otlp-endpoint"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;otelcol:4317"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--insecure"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--duration"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;86400"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traces"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;multi"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
      &lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;otelcol&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otelnet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bridge&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice we now have two &lt;code&gt;otelgen&lt;/code&gt; services: &lt;code&gt;otelgen-logs&lt;/code&gt; sends logs as before, and &lt;code&gt;otelgen-traces&lt;/code&gt; sends traces to the same OTLP endpoint on our Collector.&lt;/p&gt;

&lt;p&gt;Recreate the containers with the updated configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--force-recreate&lt;/span&gt; &lt;span class="nt"&gt;--remove-orphans&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While the &lt;code&gt;otelcol&lt;/code&gt; logs will continue showing the processed logs, the easiest way to verify the traces pipeline is to check the Jaeger UI.&lt;/p&gt;

&lt;p&gt;Open your web browser and navigate to &lt;code&gt;http://localhost:16686&lt;/code&gt;. In the Jaeger UI, select &lt;code&gt;otelgen&lt;/code&gt; from the &lt;strong&gt;Service&lt;/strong&gt; dropdown menu and click &lt;strong&gt;Find Traces&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfdheqyjufu1w4t82tuf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfdheqyjufu1w4t82tuf.png" alt="Find otelgen traces in Jaeger" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see a list of traces generated by the &lt;code&gt;otelgen-traces&lt;/code&gt; service, confirming that your new traces pipeline is successfully receiving, processing, and exporting trace data to Jaeger.&lt;/p&gt;

&lt;p&gt;With this setup, you have a single Collector instance efficiently managing two completely separate data flows, demonstrating the power and flexibility of defining multiple pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fanning out to multiple destinations
&lt;/h2&gt;

&lt;p&gt;A key advantage of the OpenTelemetry Collector is its ability to easily route telemetry to multiple destinations at once, a concept often called "&lt;a href="https://en.wikipedia.org/wiki/Fan-out_(software)" rel="noopener noreferrer"&gt;fanning out&lt;/a&gt;". This is done by simply adding more exporters to a pipeline's &lt;code&gt;exporters&lt;/code&gt; list.&lt;/p&gt;

&lt;p&gt;Let's demonstrate this by forwarding both our logs and traces to &lt;a href="https://www.dash0.com/" rel="noopener noreferrer"&gt;Dash0&lt;/a&gt;, an OpenTelemetry-native platform, in addition to the existing destinations.&lt;/p&gt;

&lt;p&gt;You'll need to &lt;a href="https://www.dash0.com/sign-up" rel="noopener noreferrer"&gt;sign up for a free trial&lt;/a&gt; first, find the &lt;strong&gt;OpenTelemetry Collector&lt;/strong&gt; integration, and copy your authentication token and Dash0 endpoint (OTLP via gRPC) into your configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# [...]&lt;/span&gt;
  &lt;span class="na"&gt;otlp/dash0&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your_dash0_endpoint&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Authorization&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Bearer &amp;lt;your_dash0_token&amp;gt;&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;memory_limiter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;filter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;transform&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;otlp/dash0&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

    &lt;span class="na"&gt;traces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp/jaeger&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;otlp/dash0&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this change, both pipelines now fan out the processed data to the specified exporters. You'll see the data in your Dash0 dashboard as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4b7waag897v5n2w71e11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4b7waag897v5n2w71e11.png" alt="Otelgen traces in Dash0" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e6q3q9w6ho29k9crq75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e6q3q9w6ho29k9crq75.png" alt="Otelgen logs in Dash0" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This capability allows you to experiment with multiple backends, migrate between vendors without downtime, or serve other use cases for your telemetry, all without touching your application code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chaining pipelines with connectors
&lt;/h2&gt;

&lt;p&gt;You can create powerful data processing flows by generating new telemetry signals from your existing data.&lt;/p&gt;

&lt;p&gt;This is possible with &lt;code&gt;connectors&lt;/code&gt;. A connector is a special component that acts as both an exporter for one pipeline and a receiver for another, allowing you to chain pipelines together.&lt;/p&gt;

&lt;p&gt;Let's demonstrate this by building a system that generates an error count metric from the &lt;code&gt;otelgen&lt;/code&gt; log data. The &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/connector/countconnector" rel="noopener noreferrer"&gt;count connector&lt;/a&gt; is perfect for this.&lt;/p&gt;

&lt;p&gt;First, you'll need to define the count connector and configure it to create a metric named &lt;code&gt;log_error.count&lt;/code&gt; that increments every time it sees a log with a severity of &lt;code&gt;ERROR&lt;/code&gt; or higher:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;connectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;count/log_errors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;log_error.count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;count of errors logged&lt;/span&gt;
        &lt;span class="na"&gt;conditions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;severity_number &amp;gt;= SEVERITY_NUMBER_ERROR&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use this, go ahead and update your &lt;code&gt;service&lt;/code&gt; configuration to create a new &lt;code&gt;metrics&lt;/code&gt; pipeline. The &lt;code&gt;count/log_errors&lt;/code&gt; connector will serve as the bridge: it will be an &lt;strong&gt;exporter&lt;/strong&gt; for the &lt;code&gt;logs&lt;/code&gt; pipeline and a &lt;strong&gt;receiver&lt;/strong&gt; for the new &lt;code&gt;metrics&lt;/code&gt; pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;memory_limiter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;filter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;transform&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="c1"&gt;# The connector is added as a destination for logs.&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;otlp/dash0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;count/log_errors&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

    &lt;span class="na"&gt;traces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;memory_limiter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp/jaeger&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;otlp/dash0&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# This new pipeline receives data exclusively from the connector.&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;count/log_errors&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;memory_limiter&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlphttp/dash0&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration is a game-changer because it allows you to derive new insights from existing data streams directly within the Collector. The data flow is now:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The &lt;code&gt;logs&lt;/code&gt; pipeline processes logs and sends a copy to the &lt;code&gt;count/log_errors&lt;/code&gt; connector.&lt;/li&gt;
&lt;li&gt; The &lt;code&gt;count/log_errors&lt;/code&gt; connector inspects these logs, generates a new &lt;code&gt;log_error.count&lt;/code&gt; metric based on our condition, and passes this metric along.&lt;/li&gt;
&lt;li&gt; The &lt;code&gt;metrics&lt;/code&gt; pipeline receives the newly generated metric, batches it, and sends it to your backend.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After relaunching the services, you'll see the new &lt;code&gt;log_error.count&lt;/code&gt; metric appear in your dashboard, all without adding a single line of metrics instrumentation code to your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmanztp2cgb9yr42ms3kt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmanztp2cgb9yr42ms3kt.png" alt="Log error count metric in Dash0" width="800" height="726"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a basic example, but it demonstrates the power of a true pipeline architecture. The same principle can be used for more advanced scenarios, like using the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/connector/spanmetricsconnector" rel="noopener noreferrer"&gt;spanmetrics connector&lt;/a&gt; to automatically generate full RED metrics (request rates, error counts, and duration histograms) directly from your trace data.&lt;/p&gt;
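
&lt;p&gt;As a rough sketch, wiring up &lt;code&gt;spanmetrics&lt;/code&gt; follows the same exporter-to-receiver bridging pattern shown above. The exporter names here mirror the ones used earlier in this article, and the connector is left on its defaults; check its README for the many tuning options:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# otelcol.yaml (sketch)
connectors:
  spanmetrics: # defaults are fine to start with

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      # The connector receives a copy of every span.
      exporters: [spanmetrics, otlp/dash0]
    metrics:
      receivers: [spanmetrics]
      processors: [batch]
      exporters: [otlp/dash0]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;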

&lt;h2&gt;
  
  
  Understanding Collector distributions
&lt;/h2&gt;

&lt;p&gt;When you use the OpenTelemetry Collector, you're not running a single, monolithic application. Instead, you use a distribution: a specific binary packaged with a curated set of components (&lt;code&gt;receivers&lt;/code&gt;, &lt;code&gt;processors&lt;/code&gt;, &lt;code&gt;exporters&lt;/code&gt;, and &lt;code&gt;extensions&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This model lets you run a Collector tailored to your specific needs, or even create your own. There are three primary types of distributions you will encounter:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Official OpenTelemetry distributions
&lt;/h3&gt;

&lt;p&gt;The OpenTelemetry project maintains several official distributions. The two most common are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Core (&lt;code&gt;otelcol&lt;/code&gt;): This is a minimal, lightweight distribution that includes only the most essential and stable components. It provides a stable foundation but has limited functionality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contrib (&lt;code&gt;otelcol-contrib&lt;/code&gt;): This is the most comprehensive version, which includes almost every component from both the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector" rel="noopener noreferrer"&gt;core&lt;/a&gt; and &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib" rel="noopener noreferrer"&gt;contrib&lt;/a&gt; repositories. It is the recommended distribution for getting started, as it provides the widest range of capabilities for connecting to various sources and destinations without needing to build a custom version.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Vendor distributions
&lt;/h3&gt;

&lt;p&gt;Some observability vendors provide &lt;a href="https://opentelemetry.io/ecosystem/distributions/" rel="noopener noreferrer"&gt;their own Collector distributions&lt;/a&gt;. These are typically based on the &lt;code&gt;otelcol-contrib&lt;/code&gt; distribution but are pre-configured with the vendor's specific exporter and other recommended settings. Using a vendor distribution can simplify the process of sending data to that vendor's platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Custom distributions
&lt;/h3&gt;

&lt;p&gt;For production environments, &lt;strong&gt;the recommended practice is to build your own custom distribution&lt;/strong&gt;. This involves creating a lightweight, fit-for-purpose Collector binary that contains only the components you need.&lt;/p&gt;

&lt;p&gt;You can create a custom distribution using the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector/tree/main/cmd/builder" rel="noopener noreferrer"&gt;OpenTelemetry Collector Builder&lt;/a&gt; (&lt;code&gt;ocb&lt;/code&gt;) tool. It involves creating a simple manifest file that lists the components you want to include, and then running the &lt;code&gt;ocb&lt;/code&gt; tool to compile your custom binary.&lt;/p&gt;
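
&lt;p&gt;As an illustration, a minimal builder manifest might look like the following. The module versions are placeholders and should match the Collector release you're targeting:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# builder-config.yaml (sketch)
dist:
  name: my-otelcol
  description: Custom Collector with only the components we use
  output_path: ./build

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.120.0
processors:
  - gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.120.0
exporters:
  - gomod: go.opentelemetry.io/collector/exporter/otlpexporter v0.120.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;ocb --config builder-config.yaml&lt;/code&gt; then compiles the custom binary into the configured &lt;code&gt;output_path&lt;/code&gt;.&lt;/p&gt;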

&lt;p&gt;You can learn more about building a custom Collector distribution by &lt;a href="https://www.dash0.com/guides/custom-opentelemetry-collector"&gt;reading this guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging and observing your pipeline
&lt;/h2&gt;

&lt;p&gt;A critical piece of infrastructure like your telemetry pipeline must itself be observable and easy to debug. If the Collector is dropping data, experiencing high latency, or is unhealthy, you definitely need to know about it.&lt;/p&gt;

&lt;p&gt;Fortunately, the Collector is instrumented out of the box and provides several tools for validation and observation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Validating your configuration
&lt;/h3&gt;

&lt;p&gt;Before deploying the Collector, you should always validate that your &lt;code&gt;otelcol.yaml&lt;/code&gt; file is syntactically correct. The primary way to do this is with the &lt;code&gt;validate&lt;/code&gt; subcommand, which checks the configuration file for errors without starting the full Collector service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;otelcol-contrib validate --config=otelcol.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the configuration is valid, the command will exit silently. If there are errors, it will print them to the console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5boddse9z40ya4phe72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5boddse9z40ya4phe72.png" alt="OpenTelemetry collector validate command" width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also use &lt;a href="https://www.otelbin.io/" rel="noopener noreferrer"&gt;OtelBin&lt;/a&gt; to visualize your pipeline, and validate it against various Collector distributions before you deploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs699tlf9z636wt2jrhzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs699tlf9z636wt2jrhzt.png" alt="visualizing collector configuration through otelbin" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're writing complex OTTL statements for the transform or filter processors, you will also find the &lt;a href="https://ottl.run/" rel="noopener noreferrer"&gt;OTTL Playground&lt;/a&gt; to be a useful resource for understanding how different configurations impact the OTLP data transformation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Live debugging
&lt;/h3&gt;

&lt;p&gt;When building a pipeline, you'll often need to inspect the data flowing through it in real time. As you've already seen, the &lt;code&gt;debug&lt;/code&gt; exporter is the primary way to do this.&lt;/p&gt;

&lt;p&gt;By adding it to any pipeline's &lt;code&gt;exporters&lt;/code&gt; list, you can print the full content of traces, metrics, or logs to the console, and verify that your &lt;code&gt;receivers&lt;/code&gt; and &lt;code&gt;processors&lt;/code&gt; are working as expected.&lt;/p&gt;
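
&lt;p&gt;If you want more than the default summary output, the exporter's &lt;code&gt;verbosity&lt;/code&gt; setting controls how much is printed; &lt;code&gt;detailed&lt;/code&gt; dumps every record in full:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# otelcol.yaml
exporters:
  debug:
    verbosity: detailed # basic | normal | detailed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;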

&lt;p&gt;For debugging the Collector components themselves, you can enable the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector/tree/main/extension/zpagesextension" rel="noopener noreferrer"&gt;zPages extension&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;extensions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;zpages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# default endpoint is localhost:55679&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;extensions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;zpages&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# ... your pipelines&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the Collector is running, you can access several useful debugging pages in your browser, such as &lt;code&gt;/debug/pipelinez&lt;/code&gt; to view your pipeline components or &lt;code&gt;/debug/tracez&lt;/code&gt; to see recently sampled traces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faadqsdnl6m88rrxcg30t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faadqsdnl6m88rrxcg30t.png" alt="OpenTelemetry Collector zPages" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Observing the Collector's internal telemetry
&lt;/h3&gt;

&lt;p&gt;In production environments, you'll need to monitor the Collector's health and performance over time. This is configured under the &lt;code&gt;service.telemetry&lt;/code&gt; section.&lt;/p&gt;

&lt;p&gt;By default, the Collector sends its own internal logs to &lt;code&gt;stderr&lt;/code&gt;, and it's often the first place you'll check when there's a problem with your pipeline. For metrics, the Collector can expose its own data in a Prometheus-compatible format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;telemetry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;readers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;pull&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;exporter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0"&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8888&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtgar6kai0scbbupdns2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtgar6kai0scbbupdns2.png" alt="OpenTelemetry collector metrics" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now scrape this endpoint with a Prometheus instance to monitor key health indicators like &lt;code&gt;otelcol_exporter_send_failed_spans_total&lt;/code&gt;, &lt;code&gt;otelcol_processor_batch_send_size&lt;/code&gt;, and &lt;code&gt;otelcol_receiver_accepted_spans&lt;/code&gt;.&lt;/p&gt;
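
&lt;p&gt;For reference, a minimal Prometheus scrape job for this endpoint might look like this (the job name and target host are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# prometheus.yml (sketch)
scrape_configs:
  - job_name: otel-collector
    scrape_interval: 15s
    static_configs:
      - targets: ["collector-host:8888"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;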

&lt;p&gt;You can also push the metrics to an OTLP-compatible backend using the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otelcol.yaml&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;telemetry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;readers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;periodic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;exporter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http/protobuf&lt;/span&gt;
                &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://backend:4318&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more details, see the official documentation on &lt;a href="https://opentelemetry.io/docs/collector/internal-telemetry/" rel="noopener noreferrer"&gt;Collector telemetry&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Going to production: Collector deployment patterns
&lt;/h2&gt;

&lt;p&gt;How you run the Collector in production is a critical architectural decision. Your deployment strategy affects the scalability, security, and resilience of your entire observability setup. The two fundamental roles a Collector can play are that of an &lt;strong&gt;agent&lt;/strong&gt; or a &lt;strong&gt;gateway&lt;/strong&gt;, which can be combined into several common patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Agent-only deployment
&lt;/h3&gt;

&lt;p&gt;The simplest pattern is to deploy a Collector agent on every host or as a sidecar to every application pod. In this model, each agent is responsible for collecting, processing, and exporting telemetry directly to one or more backends.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Application → OpenTelemetry Collector (Agent) → Observability Backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach is easy to start with, but it offers limited durability: agents typically buffer data in memory, so a single node failure can lead to data loss.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Agent and gateway deployment
&lt;/h3&gt;

&lt;p&gt;A more robust production pattern enhances the agent deployment with a new, centralized gateway layer. In this model, the agent's role is simplified: it handles local collection and metadata enrichment before forwarding all telemetry to the gateway.&lt;/p&gt;

&lt;p&gt;This gateway is a standalone, centralized service consisting of one or more Collector instances that receive telemetry from all agents. It's the ideal place for heavy processing like PII scrubbing, filtering, and tail-based sampling, which ensures rules are applied consistently before data leaves your environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Application → Collector (Agent) → Collector (Gateway) → Observability Backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This layered approach provides the best of both worlds: agents handle local collection and metadata enrichment efficiently, while the gateway provides centralized control, security, and processing.&lt;/p&gt;
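
&lt;p&gt;On the agent side, this pattern often reduces to a single OTLP exporter pointed at the gateway. A sketch follows; the gateway hostname is a placeholder, and &lt;code&gt;insecure: true&lt;/code&gt; assumes a trusted internal network (use TLS otherwise):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# agent otelcol.yaml (sketch)
exporters:
  otlp/gateway:
    endpoint: gateway.observability.internal:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/gateway]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;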

&lt;h3&gt;
  
  
  High-scale deployment with a message queue
&lt;/h3&gt;

&lt;p&gt;When you're dealing with massive data volumes or require extreme durability, the standard pattern is to introduce an event queue (like &lt;a href="https://kafka.apache.org/" rel="noopener noreferrer"&gt;Apache Kafka&lt;/a&gt;) between your agents and a fleet of Collectors that act as consumers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Application → Collector (Agent) → Message Queue → Collector (Aggregator) → Backend(s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://blog.cloudflare.com/an-overview-of-cloudflares-logging-pipeline/#kafka" rel="noopener noreferrer"&gt;This pattern provides two advantages&lt;/a&gt;. The message queue acts as a massive buffer for durability; even if the aggregator fleet is down, agents can continue sending data to the queue, preventing data loss.&lt;/p&gt;

&lt;p&gt;It also provides load-leveling by decoupling the agents from the aggregators, which smooths out traffic spikes and allows the aggregators to consume data at a steady rate.&lt;/p&gt;
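
&lt;p&gt;With the contrib distribution, this is typically assembled from the Kafka exporter on the agents and the Kafka receiver on the aggregators. A sketch, with placeholder broker and topic names:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# agent otelcol.yaml (sketch)
exporters:
  kafka:
    brokers: ["kafka.internal:9092"]
    topic: otlp_spans

# aggregator otelcol.yaml (sketch)
receivers:
  kafka:
    brokers: ["kafka.internal:9092"]
    topic: otlp_spans
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;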

&lt;h2&gt;
  
  
  You're now a pipeline architect
&lt;/h2&gt;

&lt;p&gt;We've journeyed from a simple data pass-through to a powerful, multi-stage pipeline that enriches, filters, and routes telemetry data, even generating new, valuable signals along the way.&lt;/p&gt;

&lt;p&gt;By adopting the pipeline mindset, you gain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Centralized control&lt;/strong&gt;, allowing you to manage your entire telemetry flow from one place.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vendor neutrality&lt;/strong&gt; so you can swap backends with a simple config change or use multiple vendors at once.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency and cost savings&lt;/strong&gt; by applying consistent filtering policies across your entire environment, which ultimately reduces your observability bill.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced security&lt;/strong&gt; by scrubbing sensitive data before it ever leaves your infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Powerful capabilities&lt;/strong&gt; using advanced patterns like metric generation that would be complex or impossible otherwise.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To complete the picture, you need to send that data to an OpenTelemetry-native observability platform like &lt;a href="https://dash0.com/" rel="noopener noreferrer"&gt;Dash0&lt;/a&gt; that makes it easy to quickly move from raw data to insight.&lt;/p&gt;

&lt;p&gt;For more on the Collector itself, &lt;a href="https://opentelemetry.io/docs/collector/" rel="noopener noreferrer"&gt;refer to the official documentation&lt;/a&gt;. Thanks for reading!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>opentelemetry</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Production-Grade Python Logging Made Easier with Loguru</title>
      <dc:creator>Ayooluwa Isaiah</dc:creator>
      <pubDate>Wed, 05 Nov 2025 09:19:47 +0000</pubDate>
      <link>https://forem.com/dash0/production-grade-python-logging-made-easier-with-loguru-3bbj</link>
      <guid>https://forem.com/dash0/production-grade-python-logging-made-easier-with-loguru-3bbj</guid>
      <description>&lt;p&gt;Logs are often the first place you look when something goes wrong in production. They're your running commentary of what the application is doing and why, and when it fails to do what it's supposed to.&lt;/p&gt;

&lt;p&gt;While the &lt;a href="https://www.dash0.com/guides/logging-in-python#centralizing-your-python-logs" rel="noopener noreferrer"&gt;standard &lt;code&gt;logging&lt;/code&gt; module&lt;/a&gt; can be configured to produce high-quality telemetry, achieving this requires significant boilerplate: custom formatters, filters, handlers, and complex YAML configurations. It's powerful, but it's not simple.&lt;/p&gt;

&lt;p&gt;So what if you could achieve the same structured, contextual, and production-ready logging with a fraction of the complexity?&lt;/p&gt;

&lt;p&gt;This is the promise of &lt;a href="https://github.com/Delgan/loguru" rel="noopener noreferrer"&gt;Loguru&lt;/a&gt;. It's a logging library designed from the ground up to replace the cumbersome setup of the standard library with a simple, unified API that supports modern observability practices.&lt;/p&gt;

&lt;p&gt;This guide will directly address the patterns and pain points of the standard &lt;code&gt;logging&lt;/code&gt; module and show how Loguru simplifies them without compromising on effectiveness and flexibility.&lt;/p&gt;

&lt;p&gt;By the end, you will have a lean, modern logging setup that feels natural to use and is ready for production.&lt;/p&gt;

&lt;p&gt;Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Loguru philosophy
&lt;/h2&gt;

&lt;p&gt;With Python's built-in &lt;code&gt;logging&lt;/code&gt; module, the common pattern is to configure everything once by setting up your handlers, formatters, and filters, and then, in each module, grab a namespaced logger with &lt;code&gt;logging.getLogger(__name__)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It's a solid system, but the initial configuration can feel heavy: multiple objects to wire together, YAML or &lt;code&gt;dictConfig&lt;/code&gt; to maintain, and a lot of moving parts if you just want to get clean logs quickly.&lt;/p&gt;

&lt;p&gt;Loguru takes a far simpler approach. Instead of asking you to set up a hierarchy of loggers, it gives you &lt;em&gt;one&lt;/em&gt; ready-to-go logger that you just import (&lt;a href="https://pypi.org/project/loguru/" rel="noopener noreferrer"&gt;after installing it&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello, Loguru!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. The &lt;code&gt;logger&lt;/code&gt; you imported is configured to print colorized messages to &lt;code&gt;stderr&lt;/code&gt;, complete with timestamps, log levels, module names, and line numbers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-09-02 13:53:03.686 | INFO     | __main__:&amp;lt;module&amp;gt;:3 - Hello, Loguru!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While this is great for local development and is easy to read at a glance, especially with the colors, &lt;a href="https://www.dash0.com/guides/structured-logging-for-modern-applications" rel="noopener noreferrer"&gt;you'll want something more machine-friendly&lt;/a&gt; in production environments such as JSON.&lt;/p&gt;

&lt;p&gt;Start by removing the default &lt;code&gt;stderr&lt;/code&gt; handler so you can make a fresh start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;# remove the default configuration
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add your own &lt;em&gt;sink&lt;/em&gt; with the exact behavior you want. In Loguru, a &lt;em&gt;sink&lt;/em&gt; is simply a destination for your logs. It could be the standard output, a file path, a custom function, or even a &lt;a href="https://docs.python.org/3/library/logging.handlers.html#sysloghandler" rel="noopener noreferrer"&gt;logging.Handler&lt;/a&gt; from the standard library.&lt;/p&gt;
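
&lt;p&gt;For instance, any callable that accepts the formatted message works as a sink, which is handy in tests or when shipping logs somewhere Loguru doesn't cover out of the box. A minimal sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from loguru import logger

collected = []

def list_sink(message):
    # Loguru passes the fully formatted log line to the sink.
    collected.append(str(message))

logger.remove()  # drop the default stderr handler
logger.add(list_sink, level="INFO")

logger.info("Hello from a custom sink")
# collected now holds one formatted line containing the message
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;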

&lt;p&gt;Instead of wiring up separate handlers, formatters, and filters, you'll configure everything in one place with a single call to &lt;code&gt;logger.add()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That single &lt;code&gt;add()&lt;/code&gt; call completely defines the sink: where logs go, how they're formatted, and which levels get through. The &lt;code&gt;serialize&lt;/code&gt; argument is what causes the output to be formatted as a JSON object, which looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-02 17:08:04.498 | INFO     | __main__:&amp;lt;module&amp;gt;:8 - Application started&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"record"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"elapsed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"repr"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0:00:00.004885"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"seconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.004885&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"exception"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"extra"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"file"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"main.py"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/Users/ayo/dev/dash0/demo/loguru-demo/main.py"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"function"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;module&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"icon"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ℹ"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"no"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"line"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Application started"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"module"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"main"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"__main__"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"process"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;65239&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MainProcess"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"thread"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8437194496&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MainThread"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"repr"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-02 17:08:04.498902+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1756825684.498902&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This structure is deliberately rich: it includes everything Loguru knows about the log event, from timestamps, process and thread IDs, and the module and line number, down to the elapsed time since the program started.&lt;/p&gt;

&lt;p&gt;If that feels too heavy for your needs, you don't have to stick with the default. Loguru lets you provide a custom serializer function to control exactly how log records are turned into JSON. That way, you can keep the fields that matter and drop the rest.&lt;/p&gt;

&lt;p&gt;To do this reliably, and to avoid conflicts with Loguru's internal formatter, it takes three steps. First, define a function that serializes a log record into the JSON shape you want. Second, use Loguru's &lt;code&gt;patch()&lt;/code&gt; method to attach that JSON string to every record as a new field. Finally, tell the sink to output only that field through &lt;code&gt;format&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;traceback&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;subset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;level&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;level&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;# Merge extra fields directly into the top-level dict
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;extra&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
        &lt;span class="n"&gt;subset&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;extra&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;exception&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
        &lt;span class="n"&gt;exc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;exception&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;subset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;exception&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traceback&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;traceback&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format_exception&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;traceback&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;subset&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;patching&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;serialized&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;patching&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{serialized}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Application started&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This produces a much slimmer JSON log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-02T17:58:09.538713+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Application started"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that all subsequent examples in this guide assume you're using this custom serializer. For further customization, you can find all the available &lt;code&gt;record&lt;/code&gt; fields in the &lt;a href="https://loguru.readthedocs.io/en/stable/api/logger.html#loguru._logger.Logger" rel="noopener noreferrer"&gt;Loguru documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xfi4etgsvpgzzw9s1s2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xfi4etgsvpgzzw9s1s2.png" alt="Screenshot of Loguru record fields" width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How log levels work in Loguru
&lt;/h2&gt;

&lt;p&gt;Just like the standard &lt;code&gt;logging&lt;/code&gt; module, Loguru uses &lt;a href="https://www.dash0.com/knowledge/log-levels" rel="noopener noreferrer"&gt;log levels&lt;/a&gt; to annotate the severity and control the verbosity of your logs. Levels let you decide which messages are worth keeping and which ones to ignore, especially once your application is running in production.&lt;/p&gt;

&lt;p&gt;Out of the box, Loguru supports the familiar set of levels:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;Numeric value&lt;/th&gt;
&lt;th&gt;Typical use case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;TRACE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Ultra-verbose debugging (finer than &lt;code&gt;DEBUG&lt;/code&gt;).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DEBUG&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Detailed diagnostics for developers.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;INFO&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;Normal application events.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;SUCCESS&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;td&gt;A Loguru-specific level, for happy paths.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;WARNING&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;td&gt;Potential problems worth attention.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ERROR&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;td&gt;An operation failed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CRITICAL&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;Severe failures, system at risk.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Notice that Loguru adds two extra levels compared to the standard library:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;code&gt;TRACE&lt;/code&gt; for when even &lt;code&gt;DEBUG&lt;/code&gt; isn't enough.&lt;/li&gt;
&lt;li&gt; &lt;code&gt;SUCCESS&lt;/code&gt; as a positive counterpart to &lt;code&gt;WARNING&lt;/code&gt;, which can be handy for marking milestones.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can log at these levels using corresponding methods on the logger:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Function entered&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Fetching user details&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Application started&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;success&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Background job completed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Cache miss&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Database update failed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;critical&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Out of memory!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you ever need to create a custom level, you can do so with the &lt;a href="https://loguru.readthedocs.io/en/stable/api/logger.html#loguru._logger.Logger.level" rel="noopener noreferrer"&gt;level() method&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;level&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FATAL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;no&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;red&amp;gt;&amp;lt;bold&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;icon&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;!!!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you can use the generic &lt;code&gt;log()&lt;/code&gt; method and provide the custom level's name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;logger.log("FATAL", "Out of memory!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Controlling which logs appear
&lt;/h3&gt;

&lt;p&gt;Every sink that's added with &lt;code&gt;logger.add()&lt;/code&gt; can be given a minimum level. Messages below that threshold are dropped before they're written. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WARNING&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Debug message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;    &lt;span class="c1"&gt;# ignored
&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Informational&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;     &lt;span class="c1"&gt;# ignored
&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;A warning&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      &lt;span class="c1"&gt;# logged
&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;An error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;         &lt;span class="c1"&gt;# logged
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A common pattern is using environment variables to set the level so you can bump verbosity up or down without touching the code.&lt;/p&gt;

&lt;p&gt;Loguru doesn't read environment variables automatically, but you can grab the value yourself (with &lt;code&gt;os.getenv&lt;/code&gt;) and feed it into &lt;code&gt;logger.add()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LOG_LEVEL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;upper&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can control logging through the &lt;code&gt;LOG_LEVEL&lt;/code&gt; variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_LEVEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;WARNING python main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Effortless contextual logging
&lt;/h2&gt;

&lt;p&gt;A common pain point with logging is the lack of context. A message like "&lt;em&gt;Failed to update record&lt;/em&gt;" doesn't tell you much on its own. Which record? For which user? During which request? Without those details, you're left guessing.&lt;/p&gt;

&lt;p&gt;With Loguru, you don't have to cram all of that context into the message string itself. Every logging call can include extra key-value pairs, and they'll automatically be attached to the record.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Failed to update record&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;usr-1234&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rec-9876&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assuming you're using the custom JSON serializer shown earlier, you'll see output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-02T19:23:17.150211+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Failed to update record"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"usr-1234"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"record_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"rec-9876"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach works fine for adding one-off contextual fields to a log record. But if you need dynamic per-request fields (like a correlation ID) to appear across multiple log entries, passing that information around manually quickly becomes tedious.&lt;/p&gt;

&lt;p&gt;That's where Loguru's &lt;code&gt;bind()&lt;/code&gt; method comes in. With it, you can attach context to a logger once, and it will automatically carry through to every log call that uses it.&lt;/p&gt;

&lt;p&gt;Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;request_logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;req-42&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;usr-1234&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;request_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Fetching user profile&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;request_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Failed to update record&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both of the resulting log entries will include the &lt;code&gt;request_id&lt;/code&gt; and &lt;code&gt;user_id&lt;/code&gt; without you having to repeat them each time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-02T20:26:37.400265+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Fetching user profile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"req-42"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"usr-1234"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-02T20:26:37.400330+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Failed to update record"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"req-42"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"usr-1234"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While &lt;code&gt;bind()&lt;/code&gt; works well for adding context to logs in the same scope, it's often more useful to attach context for the duration of a block of code, such as the lifetime of an HTTP request in a web app.&lt;/p&gt;

&lt;p&gt;That's where &lt;code&gt;logger.contextualize()&lt;/code&gt; comes in. It's a context manager that pushes values into the logging context when you enter the block, and automatically removes them when you exit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Request&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;

&lt;span class="c1"&gt;# ...existing logging configuration
&lt;/span&gt;
&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@app.middleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add_request_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;call_next&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;start_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monotonic&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;request_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-Request-ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uuid4&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt;
    &lt;span class="n"&gt;client_ip&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;user_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;contextualize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Incoming {method} request to {path}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;client_ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;client_ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;user_agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;call_next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;duration_ms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monotonic&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;start_time&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;

        &lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ERROR&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WARNING&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{method} {path} completed with status {status} in {duration:.2f} ms&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;duration_ms&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;

&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/users/{user_id}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_user&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;contextualize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User profile request received.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;uvicorn&lt;/span&gt;

    &lt;span class="n"&gt;uvicorn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main:app&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nb"&gt;reload&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;access_log&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
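&lt;p&gt;The status-code-to-level mapping in the middleware is plain Python, so if you need it in more than one place you can factor it into a small helper. The name &lt;code&gt;level_for_status&lt;/code&gt; is our own here, not part of Loguru or FastAPI:&lt;/p&gt;

```python
def level_for_status(status_code: int) -> str:
    """Map an HTTP status code to a Loguru level name.

    Mirrors the if/elif chain in the middleware above; the helper
    name is hypothetical, not part of Loguru or FastAPI.
    """
    if status_code >= 500:
        return "ERROR"
    if status_code >= 400:
        return "WARNING"
    return "INFO"


# Usable directly as logger.log(level_for_status(response.status_code), ...)
print(level_for_status(200), level_for_status(404), level_for_status(503))  # → INFO WARNING ERROR
```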



&lt;p&gt;With &lt;code&gt;contextualize()&lt;/code&gt;, you only need to declare the context once at the start of the request. The middleware guarantees that every log line for that request will carry the right identifying fields, and they'll disappear as soon as the request finishes.&lt;/p&gt;

&lt;p&gt;The result is clean, consistent, and scoped contextual logging, which is exactly what you need to correlate events in production without cluttering your log calls.&lt;/p&gt;

&lt;p&gt;Once you send some requests to that route, you'll notice how each log includes the same &lt;code&gt;request_id&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Incoming GET request to /users/12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"598e2e5f-e33d-4f05-a658-0a8287d766a6"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/users/12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"client_ip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"user_agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"curl/8.7.1"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"User profile request received."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"598e2e5f-e33d-4f05-a658-0a8287d766a6"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"12"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET /users/12 completed with status 200 in 1.80 ms"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"598e2e5f-e33d-4f05-a658-0a8287d766a6"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/users/12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"duration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.7961669946089387&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Error and exception logging with Loguru
&lt;/h2&gt;

&lt;p&gt;The simplest way to capture errors in Loguru is by calling &lt;code&gt;logger.error()&lt;/code&gt; and including the exception details if you're in an &lt;code&gt;except&lt;/code&gt; block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Something went wrong: {}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will log the error message (division by zero), but not the full traceback, which is a crucial piece of context when you're debugging the issue.&lt;/p&gt;

&lt;p&gt;To capture the full exception, use the &lt;code&gt;logger.exception()&lt;/code&gt; method instead. It attaches a richer &lt;code&gt;record["exception"]&lt;/code&gt; object, which includes the error type, value, and complete Python traceback:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Something went wrong: {}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This logs the message at the &lt;code&gt;ERROR&lt;/code&gt; level and includes the full traceback. If you're using the custom JSON serializer from earlier, you'll see the fields in structured form:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-03T07:55:37.157368+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Something went wrong: division by zero"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"exception"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ZeroDivisionError"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"division by zero"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"traceback"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"Traceback (most recent call last):&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/Users/ayo/dev/dash0/demo/loguru-demo/main.py&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 50, in &amp;lt;module&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    1 / 0&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    ~~^~~&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"ZeroDivisionError: division by zero&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might also see a colorized traceback in your terminal, despite serializing to JSON:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdp1u6qti1icqkuxqd3mb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdp1u6qti1icqkuxqd3mb.png" alt="Exception with colorized traceback" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This happens because Loguru tries to be helpful: if a log record contains an exception but the sink's &lt;code&gt;format&lt;/code&gt; string doesn't explicitly handle it (with &lt;code&gt;{exception}&lt;/code&gt;), Loguru appends the formatted traceback by default.&lt;/p&gt;

&lt;p&gt;To suppress it, define the log format with a custom function instead of a format string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;custom_formatter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{serialized}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;custom_formatter&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even though we're simply returning the same &lt;code&gt;{serialized}&lt;/code&gt; format, supplying it through the &lt;code&gt;custom_formatter()&lt;/code&gt; function tells Loguru to output &lt;em&gt;exactly&lt;/em&gt; and &lt;em&gt;only&lt;/em&gt; the content of our pre-formatted JSON string, so the colorized traceback no longer appears in the console.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the &lt;code&gt;catch()&lt;/code&gt; decorator
&lt;/h3&gt;

&lt;p&gt;To capture exceptions without an explicit &lt;code&gt;try&lt;/code&gt;/&lt;code&gt;except&lt;/code&gt; block, you can use the &lt;a href="https://loguru.readthedocs.io/en/stable/api/logger.html#loguru._logger.Logger.catch" rel="noopener noreferrer"&gt;&lt;code&gt;@logger.catch&lt;/code&gt; decorator&lt;/a&gt; or the &lt;code&gt;with logger.catch():&lt;/code&gt; context manager. It automatically catches any exception and logs it with a full stack trace; by default the exception is then suppressed, but you can pass &lt;code&gt;reraise=True&lt;/code&gt; to propagate it after logging:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;

&lt;span class="nd"&gt;@logger.catch&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;divide&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;

&lt;span class="nf"&gt;divide&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This produces a formatted and informative traceback as before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-03T08:48:20.779242+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"An error has been caught in function '&amp;lt;module&amp;gt;', process 'MainProcess' (34502), thread 'MainThread' (8437194496):"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"exception"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ZeroDivisionError"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"division by zero"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"traceback"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"Traceback (most recent call last):&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/Users/ayo/dev/dash0/demo/loguru-demo/.venv/lib/python3.13/site-packages/loguru/_logger.py&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 1297, in catch_wrapper&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    return function(*args, **kwargs)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/Users/ayo/dev/dash0/demo/loguru-demo/main.py&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 44, in divide&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    return a / b&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;           ~~^~~&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"ZeroDivisionError: division by zero&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The &lt;code&gt;diagnose&lt;/code&gt; and &lt;code&gt;backtrace&lt;/code&gt; parameters
&lt;/h3&gt;

&lt;p&gt;One of Loguru's most powerful debugging features is its ability to show the values of local variables directly inside the stack trace. Enabling &lt;code&gt;diagnose=True&lt;/code&gt; when adding a sink tells Loguru to include this extra detail, making it far easier to understand why a line of code failed.&lt;/p&gt;

&lt;p&gt;This feature is fantastic during development but should never be used in production. With &lt;code&gt;diagnose=True&lt;/code&gt;, &lt;a href="https://www.dash0.com/guides/scrubbing-sensitive-data-with-opentelemetry" rel="noopener noreferrer"&gt;sensitive information such as passwords, tokens, or personal data&lt;/a&gt; can easily end up in your logs. Always disable it in production by setting &lt;code&gt;diagnose=False&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For production you'll also want to keep tracebacks focused on your own code. Setting &lt;code&gt;backtrace=False&lt;/code&gt; stops Loguru from extending the traceback upward beyond the point where the exception was caught, leaving you with a concise and readable stack trace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;diagnose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Avoid leaking sensitive data
&lt;/span&gt;    &lt;span class="n"&gt;backtrace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;  &lt;span class="c1"&gt;# Show only relevant frames
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Improving logging performance
&lt;/h2&gt;

&lt;p&gt;Writing to files or sending logs over the network is I/O-heavy, and each logging call blocks the calling thread until the write completes. Multiply that across many threads, and those small pauses can add up to real slowdowns.&lt;/p&gt;

&lt;p&gt;Loguru keeps messages clean and consistent even in these scenarios. &lt;a href="https://loguru.readthedocs.io/en/stable/overview.html#asynchronous-thread-safe-multiprocess-safe" rel="noopener noreferrer"&gt;All sinks are thread-safe by default&lt;/a&gt;: when multiple threads log to the same resource, Loguru uses internal locks to ensure each message is written fully before the next begins. This prevents overlapping or corrupted log lines without any extra work on your part.&lt;/p&gt;

&lt;p&gt;If you want to remove even that small blocking cost, &lt;strong&gt;the fix is to make logging asynchronous&lt;/strong&gt;. Instead of writing directly, worker threads push log records into an in-memory queue and continue immediately. A background thread pulls from the queue and performs the slow I/O, so your main application is never delayed.&lt;/p&gt;
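&lt;p&gt;This producer/consumer pattern is the same one the standard library exposes through &lt;code&gt;QueueHandler&lt;/code&gt; and &lt;code&gt;QueueListener&lt;/code&gt;; sketching it with the stdlib shows what a queue-backed sink does under the hood:&lt;/p&gt;

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue = queue.Queue()

# The application thread only enqueues the record: fast and non-blocking.
app_logger = logging.getLogger("app")
app_logger.setLevel(logging.INFO)
app_logger.addHandler(QueueHandler(log_queue))

# A background thread dequeues records and performs the slow I/O.
listener = QueueListener(log_queue, logging.StreamHandler())
listener.start()

app_logger.info("written by the background thread")
listener.stop()  # drains the queue before returning, like logger.complete()
```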

&lt;p&gt;Loguru makes this pattern effortless. Just pass &lt;code&gt;enqueue=True&lt;/code&gt; when you add a sink, and it automatically sets up the queue and background worker for you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;file.log&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;enqueue&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;  &lt;span class="c1"&gt;# non-blocking and safe across threads/processes
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But when your application is shutting down, there may still be messages sitting in that queue that haven't been written yet. If the program exits immediately, those messages will be lost.&lt;/p&gt;

&lt;p&gt;That's where &lt;a href="https://loguru.readthedocs.io/en/stable/api/logger.html#loguru._logger.Logger.complete" rel="noopener noreferrer"&gt;&lt;code&gt;logger.complete()&lt;/code&gt;&lt;/a&gt; comes in. It flushes the queue by waiting until every enqueued log record has been processed, so nothing is lost on exit.&lt;/p&gt;

&lt;p&gt;It can be called from both synchronous and asynchronous code (with &lt;code&gt;await&lt;/code&gt;). The typical use case is at shutdown, right before your process exits, when you want to make sure that all logs have been written out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@app.on_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shutdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;shutdown_event&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;complete&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Lazy evaluation of expensive functions
&lt;/h3&gt;

&lt;p&gt;Sometimes you want to log verbose or expensive details in development, but avoid paying the cost of computing them in production. &lt;a href="https://loguru.readthedocs.io/en/stable/api/logger.html#loguru._logger.Logger.opt" rel="noopener noreferrer"&gt;Loguru's &lt;code&gt;opt(lazy=True)&lt;/code&gt; method&lt;/a&gt; makes this possible by only evaluating values if the log message actually passes the sink's level filter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;expensive_function&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# Simulate something costly
&lt;/span&gt;    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;opt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lazy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Expensive result: {x}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;expensive_function&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, if the configured sink is set to &lt;code&gt;INFO&lt;/code&gt; or higher, &lt;code&gt;expensive_function()&lt;/code&gt; is never called. The purpose of the &lt;code&gt;lambda&lt;/code&gt; is to defer execution, so that Loguru can decide at runtime whether to actually evaluate it.&lt;/p&gt;

&lt;p&gt;Beyond lazy evaluation, &lt;code&gt;opt()&lt;/code&gt; also provides other per-message tweaks for handling stack traces, formatting, and context when you need them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing logs to files
&lt;/h2&gt;

&lt;p&gt;While the 12-Factor App methodology &lt;a href="https://12factor.net/logs" rel="noopener noreferrer"&gt;recommends logging to standard output&lt;/a&gt;, many deployments still require file-based logging. Loguru has powerful, built-in &lt;a href="https://loguru.readthedocs.io/en/stable/api/logger.html#loguru._logger.Logger.add" rel="noopener noreferrer"&gt;rotation and retention mechanisms&lt;/a&gt; that are trivial to configure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my_app.log&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;rotation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;50 MB&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;# Rotates when the file reaches 50 MB
&lt;/span&gt;    &lt;span class="n"&gt;retention&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5 days&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;     &lt;span class="c1"&gt;# Keeps logs for 5 days
&lt;/span&gt;    &lt;span class="n"&gt;compression&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zip&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;# Compresses old log files
&lt;/span&gt;    &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;enqueue&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, it is often a better practice to offload log rotation to a dedicated system utility like &lt;a href="https://www.dash0.com/guides/log-rotation-linux-logrotate" rel="noopener noreferrer"&gt;logrotate&lt;/a&gt; so that application concerns are cleanly separated from operational concerns. This means you would simply log to a file and let &lt;code&gt;logrotate&lt;/code&gt; handle the rest.&lt;/p&gt;
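&lt;p&gt;As a point of comparison, an equivalent &lt;code&gt;logrotate&lt;/code&gt; policy might look like this (the path and thresholds are illustrative). Note &lt;code&gt;copytruncate&lt;/code&gt;: because the application keeps its file handle open, the log is copied and truncated in place rather than moved:&lt;/p&gt;

```
/var/log/my_app/my_app.log {
    size 50M
    rotate 5
    compress
    missingok
    notifempty
    copytruncate
}
```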

&lt;h2&gt;
  
  
  Disabling Loguru in tests
&lt;/h2&gt;

&lt;p&gt;When running automated tests, logs are often more distracting than helpful: they clutter the output and make failures harder to read. Most of the time you either want logging disabled entirely or reduced to critical errors only.&lt;/p&gt;

&lt;p&gt;Because all logging goes through sinks, you can control output globally by adding or removing them in your test configuration. The most straightforward approach is to remove all sinks at the start of your tests. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pytest&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;

&lt;span class="nd"&gt;@pytest.fixture&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;autouse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;disable_loguru&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# Remove all configured sinks so nothing is printed during tests
&lt;/span&gt;    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this fixture, every test runs with Loguru disabled by default. If you'd rather keep Loguru active but silence its output, you can redirect logs to an in-memory buffer instead of the console or a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;

&lt;span class="nd"&gt;@pytest.fixture&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;autouse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;swallow_loguru&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StringIO&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;  &lt;span class="c1"&gt;# Logs go here but never reach stdout
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Redirecting standard logging into Loguru
&lt;/h2&gt;

&lt;p&gt;Frameworks, libraries, and dependencies all bring their own loggers, and nearly all of them use Python's standard &lt;code&gt;logging&lt;/code&gt; module. The result is a flood of messages that don't match your application's formatting, don't benefit from your structured JSON output, and can quickly overwhelm you with noise unless carefully managed.&lt;/p&gt;

&lt;p&gt;Instead of configuring dozens of different loggers by hand, you can redirect every standard logging call into your Loguru pipeline. You do this by configuring an &lt;code&gt;InterceptHandler&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;

&lt;span class="c1"&gt;# [...your existing Loguru configuration]
&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;InterceptHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Handler&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Get corresponding Loguru level if it exists
&lt;/span&gt;        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;level&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;levelname&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;ValueError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;levelno&lt;/span&gt;

        &lt;span class="c1"&gt;# Find caller from where originated the logged message
&lt;/span&gt;        &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;depth&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;currentframe&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;f_code&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;co_filename&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;__file__&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;f_back&lt;/span&gt;
            &lt;span class="n"&gt;depth&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

        &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;opt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;depth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;depth&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exc_info&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getMessage&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="c1"&gt;# This line intercepts all logs from the standard logging module
&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;basicConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;handlers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;InterceptHandler&lt;/span&gt;&lt;span class="p"&gt;()],&lt;/span&gt; &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;force&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Standard library logging intercepted. All logs will now be handled by Loguru.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this handler in place, logs from the &lt;code&gt;logging&lt;/code&gt; module will be captured by Loguru and formatted just like your application's own logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-03T11:27:43.693500+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Standard library logging intercepted. All logs will now be handled by Loguru."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a comprehensive guide on switching from the standard &lt;code&gt;logging&lt;/code&gt; module, see the &lt;a href="https://loguru.readthedocs.io/en/stable/resources/migration.html" rel="noopener noreferrer"&gt;Loguru migration documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bringing your Python logs into an observability pipeline
&lt;/h2&gt;

&lt;p&gt;Once your Python services are emitting well-structured, context-rich logs with Loguru, the next step is to move them beyond local storage and into a &lt;a href="https://www.dash0.com/guides/opentelemetry-collector" rel="noopener noreferrer"&gt;centralized observability pipeline&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Centralizing your logs lets you search across services, build dashboards, and trigger alerts. Even more importantly, logs can be correlated with other signals &lt;a href="https://www.dash0.com/knowledge/logs-metrics-and-traces-observability" rel="noopener noreferrer"&gt;like metrics and traces&lt;/a&gt; to give you a complete picture of your system's health and behavior.&lt;/p&gt;

&lt;p&gt;Modern observability platforms like &lt;a href="https://www.dash0.com/" rel="noopener noreferrer"&gt;Dash0&lt;/a&gt; can ingest the JSON output you configure with Loguru's custom serializer. Once ingested, those logs can be filtered, aggregated, and visualized just like any other telemetry stream.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxptje0n0bgcrfrrezmr3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxptje0n0bgcrfrrezmr3.png" alt="Logs in Dash0" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What about OpenTelemetry?
&lt;/h3&gt;

&lt;p&gt;This is where we must address a significant trade-off with Loguru. Unlike the standard &lt;code&gt;logging&lt;/code&gt; module, Loguru does not have an official, first-party integration with OpenTelemetry. This means that trace context (if available) is not propagated into your logs automatically.&lt;/p&gt;

&lt;p&gt;However, you can build this bridge manually. The correct approach is to access the active trace context from OpenTelemetry within your application and inject it into the Loguru logger. This gives you the correlation you need for true observability.&lt;/p&gt;

&lt;p&gt;Here is a practical example using a FastAPI middleware. This middleware will automatically grab the current &lt;code&gt;trace_id&lt;/code&gt; and &lt;code&gt;span_id&lt;/code&gt; and add them to the logging context for the duration of the request.&lt;/p&gt;

&lt;p&gt;First, ensure you have the necessary OpenTelemetry packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;opentelemetry-api opentelemetry-sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then update your application code as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;traceback&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;loguru&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Request&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.sdk.trace&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TracerProvider&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.sdk.trace.export&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ConsoleSpanExporter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SimpleSpanProcessor&lt;/span&gt;

&lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_tracer_provider&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TracerProvider&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;tracer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer_provider&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;add_span_processor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nc"&gt;SimpleSpanProcessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ConsoleSpanExporter&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;subset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;level&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;level&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;# Merge extra fields directly into the top-level dict
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;extra&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
        &lt;span class="n"&gt;subset&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;extra&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;exception&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
        &lt;span class="n"&gt;exc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;exception&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;subset&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;exception&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traceback&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;traceback&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format_exception&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;traceback&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;subset&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;patching&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;serialized&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;patching&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;custom_formatter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{serialized}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;custom_formatter&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@app.middleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;otel_logging_middleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;call_next&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Start a new span for the incoming request
&lt;/span&gt;    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http_request&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Get the current trace and span IDs
&lt;/span&gt;        &lt;span class="n"&gt;span_context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_span_context&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;trace_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;span_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;trace_id&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;032&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="n"&gt;span_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;span_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;span_id&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;016&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

        &lt;span class="c1"&gt;# Add IDs to the logging context for the duration of the request
&lt;/span&gt;        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;contextualize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;trace_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;trace_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;span_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;span_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Request started&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;call_next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Request finished&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;

&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/users/{user_id}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_user&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Fetching user profile for {user_id}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello from Kigali!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;uvicorn&lt;/span&gt;
    &lt;span class="n"&gt;uvicorn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nb"&gt;reload&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;access_log&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when you run this application and make a request to &lt;code&gt;/users/12&lt;/code&gt;, every log message generated during that request will automatically be enriched with the &lt;code&gt;trace_id&lt;/code&gt; and &lt;code&gt;span_id&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-03T14:05:25.853738+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Request started"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"trace_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"14a89eba2e2232303a467ff70d8dc584"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"span_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"53550a871addc2b5"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-03T14:05:25.854829+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Fetching user profile for 12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"trace_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"14a89eba2e2232303a467ff70d8dc584"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"span_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"53550a871addc2b5"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"12"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-03T14:05:25.855016+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Request finished"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"trace_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"14a89eba2e2232303a467ff70d8dc584"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"span_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"53550a871addc2b5"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While this manual setup requires more boilerplate than using a library with native OTel support, it is a robust pattern that makes your Loguru logs truly production-grade and fully integrated into a modern observability stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;Logging is often treated as an afterthought, but in production it is one of the most important windows into what your services are really doing.&lt;/p&gt;

&lt;p&gt;Python's standard library &lt;code&gt;logging&lt;/code&gt; module is flexible but verbose, often requiring layers of handlers, formatters, and filters before it produces something useful.&lt;/p&gt;

&lt;p&gt;Loguru takes a different approach. By collapsing that complexity into a single logger object with the powerful &lt;code&gt;add()&lt;/code&gt; method, it makes advanced logging accessible with just a few lines of code.&lt;/p&gt;

&lt;p&gt;Features like structured JSON output, contextual logging, exception handling, and non-blocking sinks give you production-grade logging without the boilerplate.&lt;/p&gt;

&lt;p&gt;Of course, Loguru isn't a silver bullet. It currently lacks first-class OpenTelemetry support, and you may still need to bridge with the standard logging module to capture logs from third-party libraries. Even so, its simplicity and flexibility make it an excellent choice for modern Python applications.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>python</category>
      <category>logging</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>9 Logging Best Practices You Should Know</title>
      <dc:creator>Ayooluwa Isaiah</dc:creator>
      <pubDate>Tue, 14 Oct 2025 12:40:30 +0000</pubDate>
      <link>https://forem.com/dash0/a-programmers-guide-to-logging-best-practices-25m8</link>
      <guid>https://forem.com/dash0/a-programmers-guide-to-logging-best-practices-25m8</guid>
      <description>&lt;p&gt;We continue to build increasingly complex, distributed systems, yet we often diagnose them with little more than glorified &lt;code&gt;printf&lt;/code&gt; statements. While the practice of logging has been with us since the earliest days of computing, too many teams still treat it as an afterthought.&lt;/p&gt;

&lt;p&gt;The consequences are all too familiar: the shocking cloud bill for debug logs that were never removed, the afternoon wasted trying to make sense of logs that say everything and nothing at the same time, and the thankless task of manually correlating events across services when your tools should have done it for you.&lt;/p&gt;

&lt;p&gt;This guide is about fixing that. &lt;a href="https://www.dash0.com/knowledge/logs-metrics-and-traces-observability" rel="noopener noreferrer"&gt;Logs aren't the whole observability story&lt;/a&gt;, but they can be transformed from unstructured strings scattered through a codebase into useful signals that drive real insight. The following checklist of best practices will help you do just that.&lt;/p&gt;

&lt;p&gt;Let's begin!&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Start with structured logging
&lt;/h2&gt;

&lt;p&gt;Unstructured, string-formatted logs are an anti-pattern in modern systems. If you're still writing logs designed to be read by someone running &lt;code&gt;grep&lt;/code&gt; on a server, you're not just behind the times; you're actively building an unobservable system.&lt;/p&gt;

&lt;p&gt;Logs are data and must be treated as such from the moment of creation. This means every log entry must be a structured, machine-parsable object (&lt;a href="https://www.dash0.com/guides/structured-logging-for-modern-applications" rel="noopener noreferrer"&gt;JSON&lt;/a&gt; is the lingua franca here). Every piece of information becomes a distinct key-value pair, ready to be indexed, queried, and aggregated.&lt;/p&gt;

&lt;p&gt;Instead of emitting a blob of text like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[2025-07-24 07:45:10] INFO: Payment processed for user 12345 in 54ms. Request ID: abc-xyz-789
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'd output a queryable data record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-07-24T06:45:10.123Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"payment processed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"billing-api"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"duration_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;54&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"12345"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"trace_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"abc-xyz-789"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This fundamentally changes how you interact with your logs, moving you from clumsy regex-based text search to a more powerful, query-based analysis.&lt;/p&gt;

&lt;p&gt;This makes it trivial to answer questions like "How many payment operations failed in the last hour?", "Show me all logs tied to request &lt;code&gt;abc-xyz-789&lt;/code&gt;", or "Which logs are associated with user &lt;code&gt;12345&lt;/code&gt; today?" with a single fast query.&lt;/p&gt;

&lt;p&gt;Most modern logging frameworks now support or default to emitting structured output out of the box, and languages are starting to treat it as a core feature rather than an optional add-on (&lt;a href="https://www.dash0.com/guides/logging-in-go-with-slog" rel="noopener noreferrer"&gt;see Go's log/slog package&lt;/a&gt;).&lt;/p&gt;
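In Python, for example, the standard library gets you most of the way there with a small custom formatter. A minimal sketch (the field names simply mirror the example above, and the `fields` key passed via `extra` is a convention of this snippet, not of the `logging` module):

```python
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""

    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "level": record.levelname.lower(),
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S%z"),
            "message": record.getMessage(),
        }
        # Merge any structured fields attached via `extra={"fields": ...}`
        entry.update(getattr(record, "fields", {}))
        return json.dumps(entry)


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.info(
    "payment processed",
    extra={"fields": {"service": "billing-api", "duration_ms": 54, "user_id": "12345"}},
)
```

Dedicated libraries (structlog, Loguru's `serialize=True`, and similar) remove even this small amount of boilerplate.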

&lt;h2&gt;
  
  
  2. Establish an observability contract with OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;Adopting structured logging is just the first step. Without a shared standard, you're still left with the chaos of inconsistent names and semantics across services.&lt;/p&gt;

&lt;p&gt;One service might log a user ID as &lt;code&gt;"user_id": "12345"&lt;/code&gt;, another as &lt;code&gt;"userId": "12345"&lt;/code&gt;, and yet another as &lt;code&gt;"customer": { "id": "12345" }&lt;/code&gt;. Multiply that inconsistency across dozens of services, and the result is that observability becomes nearly impossible since everyone is speaking a different language.&lt;/p&gt;

&lt;p&gt;To fix this, you need to establish an observability contract: a single, enforced schema for telemetry across all your services. This is where OpenTelemetry (OTel) becomes your foundation, providing a common structure (&lt;a href="https://www.dash0.com/knowledge/opentelemetry-protocol-otlp#how-logs-are-represented-in-otlp" rel="noopener noreferrer"&gt;the log data model&lt;/a&gt;) and a common vocabulary (&lt;a href="https://www.dash0.com/knowledge/otel-semantic-conventions-explainer" rel="noopener noreferrer"&gt;semantic conventions&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The good news is that you don't have to rip out your existing instrumentation. OpenTelemetry provides two clear paths (or "bridges") for bringing your logs into compliance.&lt;/p&gt;

&lt;p&gt;For applications you control, you can integrate OTel directly with your existing logging library through an appender (or exporter). This component intercepts structured logs from your library, translates them to the OTel model in memory, and sends them directly to a collector or backend.&lt;/p&gt;

&lt;p&gt;For legacy or third-party systems you can't change, let them continue writing to &lt;code&gt;stdout&lt;/code&gt; or local files and have the &lt;a href="https://www.dash0.com/guides/opentelemetry-collector" rel="noopener noreferrer"&gt;OTel Collector&lt;/a&gt; ingest and transform them into the OTel model before forwarding them.&lt;/p&gt;
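As a sketch of that Collector path, a minimal pipeline that tails JSON log files and forwards them over OTLP might look like this (the file path and endpoint below are placeholders, not values from this article):

```yaml
receivers:
  filelog:
    # Placeholder path; point this at the files your legacy system writes
    include: [/var/log/legacy-app/*.log]
    operators:
      # Parse each JSON line into log attributes
      - type: json_parser

exporters:
  otlphttp:
    # Placeholder endpoint for your OTLP-capable backend or gateway
    endpoint: https://otlp.example.com:4318

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlphttp]
```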

&lt;p&gt;Finally, a contract is useless without enforcement. Use tools like the &lt;a href="https://github.com/open-telemetry/weaver" rel="noopener noreferrer"&gt;OpenTelemetry weaver&lt;/a&gt; to document and enforce a core schema of attributes that are mandatory for every service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgg4tv5kxhzfn30fyjisd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgg4tv5kxhzfn30fyjisd.png" alt="OpenTelemetry weaver checks" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then back it up with automation in your CI/CD pipeline so that builds that introduce unapproved attributes or omit required ones fail automatically, making compliance a baked-in part of your engineering standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Enrich your logs with sufficient context
&lt;/h2&gt;

&lt;p&gt;With the contract in place, you must ensure that every log is enriched with enough context to be useful on its own. &lt;strong&gt;A log message without context is just noise&lt;/strong&gt;. Every log should be able to answer critical questions about its origin, scope, and intent.&lt;/p&gt;

&lt;p&gt;You can think of context in two layers:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The platform context
&lt;/h3&gt;

&lt;p&gt;This is the environmental and request-level context that should be attached to every single log automatically with zero developer effort. This tells you &lt;em&gt;where&lt;/em&gt; the log came from, and what it is related to.&lt;/p&gt;

&lt;p&gt;This is where your platform engineering shines, and OpenTelemetry provides the tools to make this automation seamless:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The OTel SDK automatically injects &lt;code&gt;trace_id&lt;/code&gt; and &lt;code&gt;span_id&lt;/code&gt;, linking logs to the specific request that generated them.&lt;/li&gt;
&lt;li&gt;The OTel Collector can automatically detect and attach metadata from the host environment, such as &lt;code&gt;cloud.provider&lt;/code&gt;, &lt;code&gt;cloud.region&lt;/code&gt;, &lt;code&gt;k8s.pod.name&lt;/code&gt;, and &lt;code&gt;service.version&lt;/code&gt; using processors like the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor" rel="noopener noreferrer"&gt;resourcedetectionprocessor&lt;/a&gt; and &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/k8sattributesprocessor" rel="noopener noreferrer"&gt;k8sattributesprocessor&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layer is your foundation. It ensures that even the most basic log can be traced back to a specific request and service instance, in a specific region, running a specific version.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The event context
&lt;/h3&gt;

&lt;p&gt;This is the business-level context that only your application code knows. It's the most valuable information for debugging and the hardest to get right. Examples include entity identifiers and other domain-specific attributes that explain what action was attempted and why.&lt;/p&gt;

&lt;p&gt;You shouldn't rely on manual effort alone here. Instead, establish a pattern where a context-aware logger (available in most logging frameworks) is injected into the request lifecycle.&lt;/p&gt;

&lt;p&gt;With this pattern, once relevant identifiers are known during request handling, they automatically flow to every downstream log line without extra effort on your part.&lt;/p&gt;

&lt;p&gt;By layering automated platform context with systematically injected event-level context, you'll create logs that are "born correlated", turning them from isolated messages into a rich, connected narrative of system behavior.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/Logging_Cheat_Sheet.md#event-attributes" rel="noopener noreferrer"&gt;OWASP logging cheat sheet&lt;/a&gt; provides a solid reference for useful event attributes. Just be sure to align with OTel's semantic conventions when naming them so your logs remain consistent and interoperable.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Use log levels as an actionable signal
&lt;/h2&gt;

&lt;p&gt;Few topics in logging generate as much debate as the proper use of &lt;a href="https://www.dash0.com/knowledge/log-levels" rel="noopener noreferrer"&gt;severity levels&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Some argue for simplicity: &lt;a href="https://ntietz.com/blog/the-only-two-log-levels-you-need-are-info-and-error/" rel="noopener noreferrer"&gt;just use &lt;code&gt;INFO&lt;/code&gt; and &lt;code&gt;ERROR&lt;/code&gt;&lt;/a&gt; to express that your system is either doing what it's supposed to or it isn't.&lt;/p&gt;

&lt;p&gt;That approach may look clean on paper, but in practice it collapses too much nuance. Not every anomaly is a failure, and not every failure requires paging someone. By reducing the vocabulary, you collapse important distinctions and lose the ability to separate actionable signals from supporting detail.&lt;/p&gt;

&lt;p&gt;Adopting more granular levels like &lt;code&gt;DEBUG&lt;/code&gt;, &lt;code&gt;WARN&lt;/code&gt; or &lt;code&gt;FATAL&lt;/code&gt; is far more effective. They encode meaningful distinctions: a &lt;code&gt;WARN&lt;/code&gt; typically highlights something unusual and actionable but not urgent (like deprecation warnings), &lt;code&gt;ERROR&lt;/code&gt; flags an actual failure, &lt;code&gt;FATAL&lt;/code&gt; signals an unrecoverable condition that leads a process to terminate, and &lt;code&gt;DEBUG&lt;/code&gt; captures detail for investigations without polluting production by default.&lt;/p&gt;

&lt;p&gt;To use them well, a few principles apply:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To avoid the "&lt;a href="https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf" rel="noopener noreferrer"&gt;boy who cried wolf&lt;/a&gt;" scenario, log levels should reflect severity and actionability. If no human needs to take action, it shouldn't be logged at a level that triggers an alert.&lt;/li&gt;
&lt;li&gt;The hardest part of logging is dialing verbosity to the right level. Too much, and you balloon costs, slow down systems, and bury engineers in noise; too little, and there's nothing useful to debug with. Verbose logs have their place, but they should be scoped, short-lived, and never treated as the default.&lt;/li&gt;
&lt;li&gt;Ensure log verbosity can be adjusted on the fly, whether for a service, a module, or even a specific user, without redeploying. This flexibility is critical when investigating live incidents.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Log levels aren't the main value of a log, but they're a powerful signal when used consistently. They help you separate the routine from the exceptional and surface low-level details only when they're needed most. Dropping down to just two levels throws away that signal for no good reason.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Keep sensitive data out of your logs
&lt;/h2&gt;

&lt;p&gt;One of the biggest risks in logging is the accidental inclusion of Personally Identifiable Information (PII) or other sensitive data. Of all the mistakes you can make, this is the one most likely to land your company on the front page for the wrong reasons.&lt;/p&gt;

&lt;p&gt;High-profile slip-ups at &lt;a href="https://www.bleepingcomputer.com/news/security/twitter-admits-recording-plaintext-passwords-in-internal-logs-just-like-github/" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; and &lt;a href="https://www.bleepingcomputer.com/news/security/github-accidentally-recorded-some-plaintext-passwords-in-its-internal-logs/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, where plaintext passwords were accidentally written to internal logs, serve as stark reminders of how serious, and how easy to make, this mistake can be.&lt;/p&gt;

&lt;p&gt;Such leaks rarely happen with malicious intent. They happen because a developer, focused on their immediate task, is unaware of the downstream security implications of their logging choices.&lt;/p&gt;

&lt;p&gt;The only effective defense is systemic and multi-layered. You have to assume mistakes will happen and design safeguards that catch them at multiple points.&lt;/p&gt;

&lt;p&gt;At the application level, avoid logging entire objects that may contain sensitive fields. Instead, implement logging-safe representations that exclude or mask PII by default. This means that new attributes on the object may need to be allowlisted before they are logged.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// User is a domain object that may contain sensitive fields.&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;User&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ID&lt;/span&gt;        &lt;span class="kt"&gt;string&lt;/span&gt;
    &lt;span class="n"&gt;Email&lt;/span&gt;     &lt;span class="kt"&gt;string&lt;/span&gt;
    &lt;span class="n"&gt;Password&lt;/span&gt;  &lt;span class="kt"&gt;string&lt;/span&gt;
    &lt;span class="n"&gt;CreatedAt&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Time&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;u&lt;/span&gt; &lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;LogValue&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="n"&gt;slog&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Only allowlist the fields considered safe to log.&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;slog&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GroupValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;slog&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;u&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;slog&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"created_at"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;u&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CreatedAt&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For ad-hoc structures, most logging frameworks let you configure a "middleware" that automatically masks or scrubs values whose keys match a blocklist of known sensitive fields before the log is written.&lt;/p&gt;

&lt;p&gt;Finally, &lt;a href="https://www.dash0.com/guides/scrubbing-sensitive-data-with-opentelemetry" rel="noopener noreferrer"&gt;apply sensitive-data scrubbing&lt;/a&gt; in the OpenTelemetry Collector (or equivalent) to prevent anything from slipping through before logs leave your systems.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# One of the most effective techniques is using an attribute allowlist&lt;/span&gt;
&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;redaction/allowlist&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;allow_all_keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;allowed_keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;http.method&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;http.url&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;http.status_code&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;attributes/allowlist&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exportes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sensitive data logging mistakes are easy to make, but with layered defenses you can drastically reduce their impact. The most effective solutions are not those that blame individuals, but those that foster a culture of shared ownership over the quality and security of observability data.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Treat logging performance as a first-class concern
&lt;/h2&gt;

&lt;p&gt;It's sometimes easy to forget that &lt;a href="https://tersesystems.com/blog/2020/07/09/logging-vs-memory/" rel="noopener noreferrer"&gt;logging isn't free&lt;/a&gt;. Every log line consumes CPU, memory, and I/O. At small scale you'll barely notice, but at scale, it can become a real bottleneck.&lt;/p&gt;

&lt;p&gt;To prevent logging from slowing down your services, adopt the following practices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose performant libraries
&lt;/h3&gt;

&lt;p&gt;Logging libraries vary widely in performance. Start by choosing modern, efficient libraries that are designed to minimize overhead. For example, in the Go ecosystem, the built-in &lt;code&gt;log/slog&lt;/code&gt; or libraries like &lt;a href="https://pkg.go.dev/github.com/rs/zerolog" rel="noopener noreferrer"&gt;Zerolog&lt;/a&gt; and &lt;a href="https://www.dash0.com/guides/logging-in-go-with-zap" rel="noopener noreferrer"&gt;Zap&lt;/a&gt; are &lt;a href="https://github.com/uber-go/zap#performance" rel="noopener noreferrer"&gt;orders of magnitude faster&lt;/a&gt; than older options like &lt;a href="https://github.com/sirupsen/logrus" rel="noopener noreferrer"&gt;Logrus&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Log asynchronously
&lt;/h3&gt;

&lt;p&gt;Don't let your main application thread wait for a log to be written to disk or the network. Instead, log asynchronously by writing messages to a fast, in-memory buffer and letting a separate background process handle the slower I/O operations so that logging has virtually no impact on request latency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Be mindful of hot paths
&lt;/h3&gt;

&lt;p&gt;Avoid placing verbose log statements inside high-frequency loops or critical code paths. For diagnostics in these areas, use intelligent sampling or rate-limiting to gather insights without overwhelming the system with a flood of logs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Defer expensive operations
&lt;/h3&gt;

&lt;p&gt;A more subtle performance issue lies in how logging arguments are evaluated. For example, a log statement like &lt;code&gt;logger.debug(f"Processing {x}")&lt;/code&gt; in &lt;a href="https://www.dash0.com/guides/logging-in-python" rel="noopener noreferrer"&gt;Python&lt;/a&gt; evaluates the formatted string even if the &lt;code&gt;DEBUG&lt;/code&gt; level is disabled.&lt;/p&gt;

&lt;p&gt;The better pattern, &lt;code&gt;logger.debug("Processing %s", x)&lt;/code&gt;, defers the string formatting until it's certain the message will be emitted, saving precious cycles in critical code.&lt;/p&gt;

&lt;p&gt;The bottom line is to treat logging code like your business logic. If you wouldn't block a request or allocate unnecessary memory in a critical path, don't let your logging library do it either.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Manage log volume and cost intelligently
&lt;/h2&gt;

&lt;p&gt;One of the quickest ways to burn money, overwhelm your systems, and blind your engineers is to let log volume grow unchecked. At scale, ingestion and storage can run into millions per year, while noisy streams bury the very signals you need during incidents.&lt;/p&gt;

&lt;p&gt;The answer isn't to stop logging, but to log &lt;em&gt;smarter&lt;/em&gt;. The best place to start is at the source by cutting high-volume, low-value logs. A classic example is successful health checks from a load balancer. Those are far more useful as a metric than as endless log lines that add little value.&lt;/p&gt;

&lt;p&gt;For even more control, some logging frameworks allow you to log continuously to an in-memory ring buffer. Under normal conditions the buffer just overwrites itself, so nothing ever leaves memory. But if the system hits an error, the buffer is dumped along with the error log to ensure that the context from the moments leading up to the failure is preserved. It keeps volume low in the happy path while still capturing rich detail when it matters most.&lt;/p&gt;

&lt;p&gt;Another effective approach is sampling. It's rarely necessary to keep every single log, especially for routine events. You might capture all failed requests but only one out of every hundred successful ones and still get a representative view.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zrw2kswsthajcik9u30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zrw2kswsthajcik9u30.png" alt="Illustration of savings before and after sampling" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sampling also helps during cascading failures: when a service starts spewing the same error on repeat, you don't need a million identical entries. A handful of representative samples tells the story just as well without overwhelming your system or ballooning your costs.&lt;/p&gt;

&lt;p&gt;One caveat is that OpenTelemetry doesn't yet support log sampling natively. Its sampling strategies apply only to traces, so you'll need to implement log sampling through your framework or observability pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Make the OpenTelemetry Collector the linchpin of your pipeline
&lt;/h2&gt;

&lt;p&gt;Shipping telemetry directly from your application to a backend works for small systems, but it doesn't scale. This approach leads to tight vendor coupling, performance bottlenecks, and operational headaches as you grow.&lt;/p&gt;

&lt;p&gt;A better architecture is to &lt;a href="https://www.dash0.com/guides/opentelemetry-collector" rel="noopener noreferrer"&gt;put the OpenTelemetry Collector at the center of your observability pipeline&lt;/a&gt;. The Collector is a performant, vendor-neutral service that can ingest all your telemetry, process it, and route it to any number of backends.&lt;/p&gt;

&lt;p&gt;It gives you a single place to enforce standards, redact sensitive fields, normalize formats, filter out noisy streams, or attach environment metadata automatically.&lt;/p&gt;

&lt;p&gt;And since it handles logs, metrics, and traces together, it can outright replace log-only agents like Fluent Bit, Logstash, or Filebeat at the edge, giving you a unified pipeline instead of a patchwork of single-purpose shippers.&lt;/p&gt;

&lt;p&gt;The payoff is flexibility. Want to send security events to one backend and application logs to another? Drop &lt;code&gt;DEBUG&lt;/code&gt; logs in production but keep them in staging? Insert new Kubernetes metadata without developer involvement? With the Collector as the linchpin, all of these become configuration changes instead of engineering projects.&lt;/p&gt;
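<p>As a sketch of one such configuration change, dropping &lt;code&gt;DEBUG&lt;/code&gt; logs becomes a single Collector filter processor entry (the receiver and exporter names below are placeholders for your own pipeline):</p>

```yaml
processors:
  filter/drop-debug:
    logs:
      log_record:
        # Drop records below INFO severity before they leave the environment
        - severity_number < SEVERITY_NUMBER_INFO

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/drop-debug]
      exporters: [otlphttp]
```

<p>Applying this only to the production Collector's config, while staging omits the processor, gives you the per-environment behavior described above with no application changes.</p>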

&lt;h2&gt;
  
  
  9. Choose an OpenTelemetry-native backend
&lt;/h2&gt;

&lt;p&gt;You've done the hard work. You've implemented structured logging, established a schema with OpenTelemetry, enriched your logs with deep context, and built a robust pipeline with the Collector. The final step is to ensure your observability platform can capitalize on this investment.&lt;/p&gt;

&lt;p&gt;This is where &lt;a href="https://www.dash0.com/blog/opentelemetry-native-the-future-of-observability" rel="noopener noreferrer"&gt;choosing an OpenTelemetry-native platform&lt;/a&gt; becomes critical. An OTel-native platform isn't just a backend that can accept &lt;a href="https://www.dash0.com/knowledge/opentelemetry-protocol-otlp" rel="noopener noreferrer"&gt;OTLP data&lt;/a&gt;; it's one whose entire data model is built around the OpenTelemetry standard.&lt;/p&gt;

&lt;p&gt;This means it inherently understands the intrinsic relationships between your signals and treats semantic conventions as first-class citizens, not just another set of tags.&lt;/p&gt;

&lt;p&gt;It knows that &lt;code&gt;db.query.text&lt;/code&gt; isn't just a string but a database query, and can parse the statement to identify the operation, highlight slow queries, and provide pre-built dashboards for database performance. These are the principles &lt;a href="https://www.dash0.com/" rel="noopener noreferrer"&gt;Dash0&lt;/a&gt; is built on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6og47d2te9vq6goemzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6og47d2te9vq6goemzj.png" alt="Screenshot of Dash0's logging interface" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By pairing your high-quality instrumentation with a platform that speaks the same native language, you ensure the payoff for all your effort is not just better data, but faster debugging, clearer insights, and more reliable systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;When done well, logging shifts from being a costly liability to a force multiplier for cutting incident response times, reducing operational overhead, and making your systems more reliable.&lt;/p&gt;

&lt;p&gt;The next time something breaks in production (and it will), the quality of your logs will determine whether you're guessing in the dark or diagnosing with confidence.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>logging</category>
      <category>observability</category>
      <category>opentelemetry</category>
    </item>
  </channel>
</rss>
