<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: solgitae</title>
    <description>The latest articles on Forem by solgitae (@solgitae).</description>
    <link>https://forem.com/solgitae</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3854804%2F38ab5a3a-dd0e-40af-aae9-7222cc60efde.jpeg</url>
      <title>Forem: solgitae</title>
      <link>https://forem.com/solgitae</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/solgitae"/>
    <language>en</language>
    <item>
      <title>When a Memory Pool Actually Helps in Go Logging</title>
      <dc:creator>solgitae</dc:creator>
      <pubDate>Wed, 01 Apr 2026 15:33:58 +0000</pubDate>
      <link>https://forem.com/solgitae/when-a-memory-pool-actually-helps-in-go-logging-l3o</link>
      <guid>https://forem.com/solgitae/when-a-memory-pool-actually-helps-in-go-logging-l3o</guid>
      <description>&lt;p&gt;When you build a high-throughput log pipeline in Go, the garbage collector quickly becomes one of your biggest bottlenecks. Every log line means new allocations: buffers, temporary structs, parsed JSON trees, and so on. At some point, you start wondering: is it time to use a memory pool?&lt;/p&gt;

&lt;p&gt;In this post I’ll walk through a simple pattern using sync.Pool and explain when it is (and is not) a good idea for log pre-processing.&lt;/p&gt;

&lt;h2&gt;The basic pattern&lt;/h2&gt;

&lt;p&gt;For log processing, the most common thing to pool is a reusable byte buffer or a small scratch struct used once per log line.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;
&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;logBufferPool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sync&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="n"&gt;any&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;64&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="m"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// 64KB buffer&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;handleLog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;raw&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// 1. Take a buffer from the pool&lt;/span&gt;
    &lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;logBufferPool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// 2. Use it for parsing / masking / rewriting&lt;/span&gt;
    &lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;          &lt;span class="c"&gt;// reset length, keep capacity&lt;/span&gt;
    &lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// do your processing on buf&lt;/span&gt;

    &lt;span class="c"&gt;// 3. Return it to the pool&lt;/span&gt;
    &lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;logBufferPool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key detail is buf = buf[:0]: this does not allocate a new array. It just resets the slice length to zero while keeping the underlying capacity. Combined with sync.Pool, this lets you reuse the same backing arrays across many log lines instead of calling make([]byte, …) thousands of times per second.&lt;/p&gt;

&lt;p&gt;In a log pre-processor that parses, reshapes, and masks JSONL lines, this pattern can eliminate a large portion of per-line heap allocations and directly reduce GC pressure.&lt;/p&gt;
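&lt;p&gt;One refinement worth knowing: calling Put with a plain []byte converts the slice header to an interface value, and that conversion itself allocates on every call (staticcheck reports this as SA6002). Pooling a pointer to the slice avoids the extra allocation. A minimal sketch of that variant, mine rather than the post's code:&lt;/p&gt;

```go
package main

import "sync"

// Pool a *[]byte instead of a []byte: storing the pointer in the
// pool's interface value does not allocate, while storing the
// slice header directly would (staticcheck rule SA6002).
var bufPool = sync.Pool{
	New: func() any {
		bp := new([]byte)
		*bp = make([]byte, 0, 64*1024)
		return bp
	},
}

func process(raw []byte) string {
	bp := bufPool.Get().(*[]byte)
	buf := (*bp)[:0] // reset length, keep capacity

	buf = append(buf, raw...) // per-line work happens on buf

	out := string(buf) // copy the result out before returning the buffer

	*bp = buf[:0]
	bufPool.Put(bp)
	return out
}

func main() {
	println(process([]byte("user=alice action=login")))
}
```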

&lt;h2&gt;When a pool is a good idea&lt;/h2&gt;

&lt;p&gt;You don’t need a pool everywhere. It shines in a few specific scenarios:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High and steady throughput&lt;/strong&gt;&lt;br&gt;
If your service is processing tens of thousands of log lines per second, even small per-line allocations add up quickly and trigger frequent GC cycles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Same shape, repeated many times&lt;/strong&gt;&lt;br&gt;
You repeatedly allocate the same “shape” of object: for example, []byte buffers of similar capacity, or small structs used during parsing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short‑lived scratch objects&lt;/strong&gt;&lt;br&gt;
The pooled objects are temporary scratch space inside a single request/log handling path, and you fully reset them before reuse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You’ve profiled, and GC is the problem&lt;/strong&gt;&lt;br&gt;
Profiling shows a hot path dominated by allocations, and reducing those allocations produces measurable throughput or latency improvement.&lt;/p&gt;

&lt;p&gt;A log pre-processing engine is almost a textbook case: each log line goes through the same transformation pipeline, uses similar-sized buffers and temporary objects, and then discards them. A pool lets you recycle that working set instead of constantly asking the runtime for new memory.&lt;/p&gt;

&lt;h2&gt;When you should avoid it&lt;/h2&gt;

&lt;p&gt;There are also clear cases where sync.Pool is not worth the cost:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Low QPS or batch tools&lt;/strong&gt;&lt;br&gt;
If your program processes a small batch of logs once in a while, the extra complexity of pooling won’t pay off.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long‑lived objects&lt;/strong&gt;&lt;br&gt;
Pools are for scratch space, not for objects that live across requests or goroutines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highly variable sizes&lt;/strong&gt;&lt;br&gt;
If your buffers/objects have wildly different sizes, you either waste memory or end up implementing a complex “pool of pools”. At that point, a custom allocator or a simpler design might be better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You haven’t measured anything&lt;/strong&gt;&lt;br&gt;
sync.Pool is not a free optimization. If the code is not allocation-bound, you’re just making it harder to read for no gain.&lt;/p&gt;

&lt;p&gt;It’s also important to remember that sync.Pool is not a strict cache. The runtime is free to drop items from the pool on any GC cycle, so you cannot use it as a reliable long‑term free list.&lt;/p&gt;

&lt;h2&gt;Practical tips&lt;/h2&gt;

&lt;p&gt;A few practical rules that have worked well for log-style workloads:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start simple, then optimize&lt;/strong&gt;&lt;br&gt;
First write straightforward code using bytes.Buffer or plain slices, then add a pool only around the clearly hot pieces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Always reset before reuse&lt;/strong&gt;&lt;br&gt;
For slices, use buf = buf[:0]. For structs, add a Reset method that clears all fields. This avoids leaking data between log lines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Return objects promptly&lt;/strong&gt;&lt;br&gt;
Use defer pool.Put(x) near the Get call when possible, to avoid forgetting to put it back in some branches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Re‑profile after adding the pool&lt;/strong&gt;&lt;br&gt;
Confirm that the change actually reduced allocations and GC time. If the improvement is negligible, delete the pool and keep the simpler code.&lt;/p&gt;

&lt;h2&gt;Closing thoughts&lt;/h2&gt;

&lt;p&gt;Memory pools are not a silver bullet, but in a high-volume logging or log pre-processing system they can be the difference between a GC-bound pipeline and a stable, predictable one. If your log engine spends most of its time allocating the same buffers over and over, pooling those buffers is some of the lowest-hanging fruit you can pick.&lt;/p&gt;

</description>
      <category>backend</category>
      <category>go</category>
      <category>performance</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
