<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kyrylo Rud</title>
    <description>The latest articles on Forem by Kyrylo Rud (@kyrylorud).</description>
    <link>https://forem.com/kyrylorud</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3597420%2F740509c8-07d5-4a65-9f1c-effa784564f6.png</url>
      <title>Forem: Kyrylo Rud</title>
      <link>https://forem.com/kyrylorud</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kyrylorud"/>
    <language>en</language>
    <item>
      <title>PMR Beyond Performance: Deterministic POSIX Signals</title>
      <dc:creator>Kyrylo Rud</dc:creator>
      <pubDate>Tue, 17 Feb 2026 07:15:00 +0000</pubDate>
      <link>https://forem.com/kyrylorud/pmr-beyond-performance-deterministic-posix-signals-58pf</link>
      <guid>https://forem.com/kyrylorud/pmr-beyond-performance-deterministic-posix-signals-58pf</guid>
      <description>&lt;p&gt;PMR is usually perceived as a performance tool - something to address allocation and deallocation bottlenecks or to build memory models for RTOS-like environments. In practice, its scope is much broader. It has been part of the standard since C++17, yet it is rarely brought up in interviews or public discussions outside of HFT (high frequency trading) or gamedev.&lt;/p&gt;

&lt;p&gt;When dealing with signal handling, especially in constrained environments, there are less obvious scenarios where PMR becomes useful. Not because it is faster, but because it provides explicit control over memory usage. Historically, similar problems were solved using C-style stack arenas or manually managed buffers. PMR offers the same idea, but expressed in standard C++ and integrated directly with standard library containers, without dropping back to raw C-style memory management.&lt;/p&gt;

&lt;p&gt;It brings controlled allocation semantics in places where heap interaction is either undesirable or outright forbidden.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution Framing
&lt;/h2&gt;

&lt;p&gt;This technique &lt;strong&gt;is not a general recommendation for signal handling&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It is relevant only in environments where a signal handler with some minimal processing is architecturally unavoidable. The goal is not to make signal handling "safe", but to reduce fragility under constrained conditions.&lt;/p&gt;

&lt;p&gt;This approach is justified only when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recovery or reporting must happen in-process;&lt;/li&gt;
&lt;li&gt;External crash reporters or core dumps are unavailable or forbidden;&lt;/li&gt;
&lt;li&gt;The required work is strictly bounded and deterministic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Outside of these constraints, standard crash handling mechanisms remain the correct and preferable solution.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Here we start
&lt;/h3&gt;

&lt;p&gt;During my career, I have encountered several situations where I had to implement custom &lt;code&gt;SIGSEGV&lt;/code&gt; handlers and perform non-trivial work inside them. These were not ideal designs. In some cases, solutions were pragmatic rather than elegant, driven by constraints rather than preference.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In one case, I had to collect a backtrace using &lt;code&gt;libunwind&lt;/code&gt; and then post-process it to produce meaningful diagnostic output written directly to &lt;code&gt;stderr&lt;/code&gt;. Using something like &lt;code&gt;libSegFault&lt;/code&gt; was not an option because additional libraries could not be shipped to the target system and because the required backtrace transformation logic was too specific.&lt;/li&gt;
&lt;li&gt;In another project, &lt;code&gt;SIGSEGV&lt;/code&gt; handling combined with jump was part of the legacy architecture. After recovering control flow, the system required additional string processing and some container manipulation to restore a consistent internal state. The architecture could not be redesigned, so the handler had to operate within those constraints.&lt;/li&gt;
&lt;li&gt;There was no third case, and I hope there will not be. Signal handlers are not something I appreciate having in a binary, except for simple termination logic in service-style applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;The common theme in these cases was not preference, but constraint.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  POSIX constraints
&lt;/h3&gt;

&lt;p&gt;What makes the situation more complex is that signal handlers themselves operate under strict POSIX requirements.&lt;br&gt;
In practice, these constraints can be summarized as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only async-signal-safe functions may be called (&lt;a href="https://man7.org/linux/man-pages/man7/signal-safety.7.html" rel="noopener noreferrer"&gt;list of functions&lt;/a&gt;):

&lt;ul&gt;
&lt;li&gt;In practice this mostly means no dynamic memory allocation (&lt;code&gt;new&lt;/code&gt;/&lt;code&gt;delete&lt;/code&gt; and &lt;code&gt;malloc&lt;/code&gt;/&lt;code&gt;free&lt;/code&gt; are not permitted);&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;No dependency on shared mutable state:

&lt;ul&gt;
&lt;li&gt;No locks;&lt;/li&gt;
&lt;li&gt;No reliance on global mutable objects;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;errno&lt;/code&gt; must be preserved;&lt;/li&gt;
&lt;li&gt;No stream flushing;&lt;/li&gt;
&lt;li&gt;The handler must not return into an undefined state.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These rules exist for good reasons, but they significantly narrow what can be done inside a handler.&lt;/p&gt;
&lt;h3&gt;
  
  
  Abstraction collapse
&lt;/h3&gt;

&lt;p&gt;The practical consequence is that most idiomatic C++ becomes unusable. No standard containers, no convenient string processing, no algorithms, no higher-level abstractions. Everything must be reduced to manual C-style handling.&lt;/p&gt;

&lt;p&gt;This is not only inconvenient. It increases complexity, introduces boilerplate, and makes the code harder to read and maintain. Engineers are forced to reimplement low-level logic even when well-tested C++ abstractions already exist. Over time, this creates fragile code paths in precisely the parts of the system that are already operating under failure conditions.&lt;/p&gt;
&lt;h2&gt;
  
  
  Where PMR comes in
&lt;/h2&gt;

&lt;p&gt;Some of the signal handler constraints are manageable. Stream output and flushing can be replaced with a direct &lt;code&gt;write&lt;/code&gt; call. &lt;code&gt;errno&lt;/code&gt; can be saved on entry and restored before exiting. Lock-free code is not particularly difficult when the handler logic is minimal.&lt;/p&gt;
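&lt;p&gt;As an illustrative sketch (the handler name and message below are placeholders, not part of any real design), the save/restore pattern looks like this:&lt;/p&gt;

```cpp
#include <cerrno>
#include <unistd.h>

// Sketch: a handler that saves errno on entry, emits its message with the
// async-signal-safe write() call, and restores errno before returning.
extern "C" void demo_handler(int /*sig*/) {
    const int saved_errno = errno;                 // preserve caller's errno
    constexpr char msg[] = "handler invoked\n";
    ::write(STDERR_FILENO, msg, sizeof(msg) - 1);  // no buffering, no flushing
    errno = saved_errno;                           // restore before returning
}
```

&lt;p&gt;The bracket matters because &lt;code&gt;write&lt;/code&gt; can itself modify &lt;code&gt;errno&lt;/code&gt; on failure.&lt;/p&gt;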

&lt;p&gt;The real difficulty appears when higher-level processing is required, especially string manipulation or limited container usage. In a signal context, the usual abstraction layers collapse. The heap cannot be trusted, global state may be inconsistent, and most idiomatic C++ becomes unusable. As a result, everything is pushed down to manual C-style handling with fixed buffers and defensive size calculations.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The core issue is not that C strings are inconvenient. The issue is that we lose a controlled allocation domain.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is where PMR becomes interesting.&lt;/p&gt;

&lt;p&gt;PMR allows us to define a small, bounded memory region - for example, backed by a stack buffer - and treat it as an isolated allocation environment. Containers and strings can operate inside this region without touching the global heap. Allocation becomes deterministic and locally scoped. No global allocator interaction, no hidden heap calls, no cross-boundary side effects.&lt;/p&gt;

&lt;p&gt;In this context, PMR is not a performance tool. It is a way to create a controlled micro-environment inside an otherwise unstable execution state. By routing all container allocations through a stack-backed &lt;code&gt;memory_resource&lt;/code&gt;, we regain limited use of C++ abstractions while following the fundamental restriction: &lt;strong&gt;no dynamic heap interaction&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Using PMR to allocate all container elements from a predefined stack buffer is often all that is required.&lt;/p&gt;
&lt;h3&gt;
  
  
  How big of a stack do we need?
&lt;/h3&gt;

&lt;p&gt;On modern OSes, a typical thread stack is on the &lt;strong&gt;order of megabytes&lt;/strong&gt; (often around 8 MB by default). For the kind of work we are discussing - bounded string formatting, a few small containers, some backtrace post-processing - this is usually plenty. For a rough mental model, even large text payloads are tiny compared to megabytes (&lt;em&gt;several pages of diagnostic text amount to only tens of kilobytes&lt;/em&gt;). So, the default alternative stack sizes provided by modern systems are typically sufficient for the strictly limited and minimal processing expected inside such a handler.&lt;/p&gt;

&lt;p&gt;However, in a &lt;code&gt;SIGSEGV&lt;/code&gt; scenario we must not rely on the current thread stack. If the signal was triggered by stack corruption or overflow, that memory cannot be considered reliable. This is why the practical "stack budget" for PMR in a handler is not the normal thread stack, but &lt;em&gt;the alternative signal stack&lt;/em&gt; configured via &lt;code&gt;sigaltstack&lt;/code&gt;, which we will get to later. For now, the order of magnitude is what matters.&lt;/p&gt;

&lt;p&gt;The key point remains the same: the memory must already exist and be bounded, because the handler must not allocate dynamically. Heap allocation is not async-signal-safe under POSIX, so &lt;code&gt;new&lt;/code&gt;/&lt;code&gt;delete&lt;/code&gt; and &lt;code&gt;malloc&lt;/code&gt;/&lt;code&gt;free&lt;/code&gt; are off the table.&lt;/p&gt;
&lt;h3&gt;
  
  
  Basic setup
&lt;/h3&gt;

&lt;p&gt;Conceptually, the setup is straightforward. We pre-allocate a fixed buffer in memory and construct a &lt;code&gt;std::pmr::monotonic_buffer_resource&lt;/code&gt; on top of it. That resource becomes the allocation domain for all containers used inside the handler.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{};&lt;/span&gt; &lt;span class="c1"&gt;// Stack-backed allocation buffer&lt;/span&gt;
&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;pmr&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;monotonic_buffer_resource&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;pmr&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;null_memory_resource&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;// Forbid fallback to global heap&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; A monotonic buffer resource performs linear allocation within a pre-defined memory region. It does not free individual objects. Memory is reclaimed only when the resource itself is destroyed. In the context of a signal handler, this behavior is desirable: the lifetime of all temporary objects is tied to the lifetime of the handler invocation itself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The only parameter that requires deliberate consideration is the size of the buffer. It must be large enough to accommodate the worst-case amount of memory required by all containers and strings constructed inside the handler. Since the work performed in the handler &lt;strong&gt;must be strictly bounded and deterministic&lt;/strong&gt;, the required size can and should be &lt;strong&gt;estimated conservatively&lt;/strong&gt;.&lt;/p&gt;
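&lt;p&gt;Such a conservative estimate can be written down explicitly. The figures below are assumptions chosen for illustration, not a recommendation; size them for the real workload of the handler:&lt;/p&gt;

```cpp
#include <cstddef>

// Illustrative worst-case budget for the handler's PMR buffer.
// All figures are assumed for this sketch.
constexpr std::size_t max_strings    = 8;    // strings built in the handler
constexpr std::size_t max_string_len = 256;  // worst-case bytes per string
constexpr std::size_t safety_factor  = 2;    // growth + allocator bookkeeping

constexpr std::size_t buffer_size = max_strings * max_string_len * safety_factor;

// Fail the build if the estimate drifts beyond what the alt stack can hold.
static_assert(buffer_size <= 8 * 1024, "handler buffer budget exceeded");
```

&lt;p&gt;Because the resource never falls back upstream, exhausting this budget raises &lt;code&gt;std::bad_alloc&lt;/code&gt; rather than silently touching the heap, so overestimating is the safer direction.&lt;/p&gt;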

&lt;p&gt;Containers are then constructed using the memory resource explicitly. Any &lt;code&gt;std::pmr&lt;/code&gt; container instantiated with this resource will allocate exclusively from the provided buffer and will not fall back to the global heap (this is guaranteed by &lt;code&gt;std::pmr::null_memory_resource&lt;/code&gt; as an upstream allocator).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;pmr&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="n"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"some text buffer"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;pmr&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;decltype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;emplace_back&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;str&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is limited but controlled use of C++ abstractions within a strictly bounded memory region.&lt;/p&gt;

&lt;h3&gt;
  
  
  Following the standard
&lt;/h3&gt;

&lt;p&gt;The minimal example will work on most systems, but according to the standard, a &lt;code&gt;std::pmr::memory_resource::allocate&lt;/code&gt; call must return storage that is suitably aligned for the requested type. If the underlying buffer cannot satisfy the required alignment, the program risks &lt;strong&gt;undefined behavior&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The buffer used to back the &lt;code&gt;monotonic_buffer_resource&lt;/code&gt; must therefore be aligned at least to &lt;code&gt;std::max_align_t&lt;/code&gt;, or to any stricter alignment that might be required by the types allocated within it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;alignas&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;max_align_t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Allocates storage with a size of at least &lt;code&gt;bytes&lt;/code&gt; bytes, aligned to the specified alignment.&lt;br&gt;
&lt;a href="https://en.cppreference.com/w/cpp/memory/memory_resource/allocate.html" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With correct alignment in place, the PMR containers can be safely instantiated on top of the resource.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remaining risks on the table
&lt;/h2&gt;

&lt;p&gt;Using PMR inside a signal handler does not make the handler POSIX-safe. It &lt;strong&gt;removes&lt;/strong&gt; only one, albeit &lt;strong&gt;major&lt;/strong&gt;, failure source: &lt;em&gt;heap interaction during an unstable runtime state&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Even with a stack-backed allocation domain, significant risks remain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;corrupted stack or registers;&lt;/li&gt;
&lt;li&gt;undefined C++ runtime state;&lt;/li&gt;
&lt;li&gt;library code that is not async-signal-safe.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Correct alignment, however, only addresses allocation mechanics. It does not eliminate the broader execution risks inherent to signal handling.&lt;/p&gt;

&lt;p&gt;Strict constraints must still be respected:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid code paths that may throw exceptions. Unwinding through potentially corrupted state is undefined territory;&lt;/li&gt;
&lt;li&gt;Avoid synchronization primitives or any form of parallelism. No locks, no hidden synchronization inside abstractions;&lt;/li&gt;
&lt;li&gt;Keep the work strictly bounded and deterministic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the typical scenario discussed here - limited string formatting or small container manipulation - these constraints are technically manageable.&lt;br&gt;
However, care must be taken to avoid accidental violations. Parallelism, for example, can be introduced indirectly through execution policies in standard algorithms.&lt;br&gt;
Even without explicitly creating threads, such behavior may break the assumptions required for safe execution inside a signal handler.&lt;/p&gt;
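&lt;p&gt;In practice this means sticking to the plain sequential algorithm overloads. A minimal sketch (the helper name is illustrative): the overloads taking &lt;code&gt;std::execution::par&lt;/code&gt; may spawn threads and take locks internally, while the plain forms run inline on the calling stack:&lt;/p&gt;

```cpp
#include <algorithm>
#include <array>

// Sequential-only sketch: sorts a tiny fixed array without an execution
// policy, so no threads or locks are introduced behind the scenes.
inline std::array<int, 4> sort_ids(std::array<int, 4> ids) {
    std::sort(ids.begin(), ids.end());  // plain overload, runs inline
    return ids;
}
```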

&lt;p&gt;Therefore, the real guarantee of this approach is limited to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Reduced fragility under controlled assumptions - not absolute correctness.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Preparing the stack for PMR
&lt;/h3&gt;

&lt;p&gt;One of the most important measures is configuring an alternative signal stack via &lt;a href="https://man7.org/linux/man-pages/man2/sigaltstack.2.html" rel="noopener noreferrer"&gt;&lt;code&gt;sigaltstack&lt;/code&gt;&lt;/a&gt;. This addresses at least two of the previously mentioned risks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It isolates the handler from a potentially corrupted or overflowed main thread stack.&lt;/li&gt;
&lt;li&gt;It provides a predictable and explicitly controlled stack size for handler execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By running the handler on an alternative stack, we reduce dependency on the original execution context. If the signal was triggered due to stack overflow or memory corruption, continuing execution on the same stack would be unsafe. The alternative stack provides a separate, pre-allocated memory region dedicated to signal handling.&lt;/p&gt;

&lt;p&gt;The setup itself is straightforward and based on &lt;code&gt;SIGSTKSZ&lt;/code&gt;, which defines a minimal recommended stack size for signal handlers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// main:&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;static_cast&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SIGSTKSZ&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;alt_stack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;stack_t&lt;/span&gt;&lt;span class="p"&gt;{};&lt;/span&gt;
&lt;span class="n"&gt;alt_stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ss_sp&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;operator&lt;/span&gt; &lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;alt_stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ss_flags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;alt_stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ss_size&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;sigaltstack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;alt_stack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;nullptr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;runtime_error&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"sigaltstack failed: "&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;strerror&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;errno&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The value of &lt;code&gt;SIGSTKSZ&lt;/code&gt; is platform-dependent and defined in &lt;code&gt;signal.h&lt;/code&gt;. In practice, it may range from relatively small values on lightweight systems to significantly larger defaults on desktop platforms. Because of this variability, it is reasonable to scale it conservatively and validate assumptions against the expected workload of the handler. Note that on recent glibc (2.34 and later), &lt;code&gt;SIGSTKSZ&lt;/code&gt; is no longer a compile-time constant, so such validation may have to happen at startup rather than via &lt;code&gt;static_assert&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The important architectural point is that the alternative stack size must be sufficient not only for the PMR buffer but also for the execution overhead of the handler itself. The PMR buffer lives inside that stack context, so both data and control flow must fit within the allocated space.&lt;/p&gt;

&lt;p&gt;While this does not eliminate all risks associated with signal handling, it allows us to control stack integrity and stack capacity explicitly, which removes two major sources of uncertainty.&lt;/p&gt;
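&lt;p&gt;As a sketch of that explicit control, the installed alternative stack can be queried back at startup and checked against the expected budget. The helper name and the caller-supplied threshold below are illustrative:&lt;/p&gt;

```cpp
#include <signal.h>
#include <cstddef>

// Sketch: report whether an alternative signal stack is installed and
// at least `min_needed` bytes large. Both data (the PMR buffer) and the
// handler's control flow must fit inside that budget.
inline bool alt_stack_ready(std::size_t min_needed) {
    stack_t current{};
    if (::sigaltstack(nullptr, &current) != 0) {
        return false;  // query failed; treat as not configured
    }
    return !(current.ss_flags & SS_DISABLE) && current.ss_size >= min_needed;
}
```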

&lt;h2&gt;
  
  
  Putting it together
&lt;/h2&gt;

&lt;p&gt;With all components in place, the overall design becomes straightforward. The goal is to establish two controlled boundaries before the failure: a dedicated execution stack and a bounded allocation domain. Once those are prepared during normal program initialization, the handler itself becomes a small, deterministic unit operating inside explicitly defined limits.&lt;/p&gt;

&lt;p&gt;The flow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Allocate and register an alternative signal stack (&lt;code&gt;sigaltstack&lt;/code&gt;) so the handler does not depend on the potentially corrupted thread stack;&lt;/li&gt;
&lt;li&gt;Install a &lt;code&gt;SIGSEGV&lt;/code&gt; handler with &lt;code&gt;sigaction&lt;/code&gt;, enabling &lt;code&gt;SA_ONSTACK&lt;/code&gt; so the handler runs on that alternative stack;&lt;/li&gt;
&lt;li&gt;Inside the handler, create a fixed, aligned buffer and build a &lt;code&gt;std::pmr::monotonic_buffer_resource&lt;/code&gt; on top of it, with &lt;code&gt;std::pmr::null_memory_resource&lt;/code&gt; as the upstream. This ensures allocations stay local and never fall back to the heap;&lt;/li&gt;
&lt;li&gt;Use PMR-backed types (for example, &lt;code&gt;std::pmr::string&lt;/code&gt;) for minimal formatting, then emit output via an async-signal-safe API such as &lt;code&gt;write&lt;/code&gt;, and terminate via &lt;code&gt;_exit&lt;/code&gt; to avoid returning into an unknown runtime state.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="cp"&gt;#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;array&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;cerrno&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;charconv&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;cstddef&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;cstdlib&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;cstring&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;memory_resource&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;new&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;stdexcept&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;string&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
&lt;/span&gt;
&lt;span class="cp"&gt;#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;signal.h&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;unistd.h&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
&lt;/span&gt;
&lt;span class="k"&gt;constexpr&lt;/span&gt; &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;signal_exit_base&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mh"&gt;0x80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// POSIX: 128 + signal number&lt;/span&gt;

&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;sig&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;alignas&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;max_align_t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{};&lt;/span&gt;
   &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;pmr&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;monotonic_buffer_resource&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;pmr&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;null_memory_resource&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
   &lt;span class="p"&gt;};&lt;/span&gt;

   &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;pmr&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"Signal "&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
   &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;pmr&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sc"&gt;'\0'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

   &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;ptr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;to_chars&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;sig&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;resize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;static_cast&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ptr&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;()));&lt;/span&gt;

   &lt;span class="n"&gt;str&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="n"&gt;str&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="s"&gt;" received!&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;STDOUT_FILENO&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;str&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;str&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
   &lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;_exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;signal_exit_base&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;sig&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;static_cast&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SIGSTKSZ&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;alt_stack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;stack_t&lt;/span&gt;&lt;span class="p"&gt;{};&lt;/span&gt;
   &lt;span class="n"&gt;alt_stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ss_sp&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;operator&lt;/span&gt; &lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="n"&gt;alt_stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ss_flags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="n"&gt;alt_stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ss_size&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;sigaltstack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;alt_stack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;nullptr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;runtime_error&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"sigaltstack failed: "&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;strerror&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;errno&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="nc"&gt;sigaction&lt;/span&gt; &lt;span class="n"&gt;sa&lt;/span&gt;&lt;span class="p"&gt;{};&lt;/span&gt;
   &lt;span class="n"&gt;sa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sa_handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="n"&gt;sa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sa_flags&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;SA_ONSTACK&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;sigemptyset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;sa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sa_mask&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;sigaction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SIGSEGV&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;sa&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;nullptr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;runtime_error&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"sigaction failed: "&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;strerror&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;errno&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="k"&gt;static_cast&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;volatile&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="o"&gt;*&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;nullptr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// trigger SIGSEGV&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;EXIT_SUCCESS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you compile and run this with &lt;code&gt;ltrace&lt;/code&gt; (and optionally pipe through &lt;code&gt;c++filt&lt;/code&gt;), you can observe the behavior when the signal is delivered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the alternative stack setup happens during normal execution (before the crash);&lt;/li&gt;
&lt;li&gt;inside the handler, PMR allocations stay within the provided buffer;&lt;/li&gt;
&lt;li&gt;there is no interaction with the global heap allocator from within the handler path.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ltrace ./a.out some here  &amp;amp;&amp;gt;/dev/stdout | c++filt 
__libc_start_main(0xaaaac2042290, 3, 0xffffed93df28, 0xaaaac2041d18 &amp;lt;unfinished ...&amp;gt;
__register_frame_info(0xaaaac2046200, 0xaaaac2060018, 16, 0xffffb2baeab4) = 9
operator new(unsigned long)(0xc000, 0, 0xffffed93df48, 0xffffb2bae338) = 0xffffb2adf030
sigaltstack(0xaaaac2060048, 0, 0xffffffa0, 4044) = 0
sigemptyset(&amp;lt;&amp;gt;)                                  = 0
sigaction(SIGSEGV, { 0xaaaac20420c0, &amp;lt;&amp;gt;, 0, 0 }, nil) = 0
--- SIGSEGV (Segmentation fault) ---
memset(0xffffb2ae99a0, '\0', 1024)               = 0xffffb2ae99a0
std::pmr::null_memory_resource()(0xffffb2ae99a0, 0, -32, 0xffffb2ae9d40) = 0xffffb2ac0178
strlen("Signal ")                                = 7
memcpy(0xffffb2ae9968, "Signal ", 7)             = 0xffffb2ae9968
memset(0xffffb2ae99a0, '\0', 16)                 = 0xffffb2ae99a0
memcpy(0xffffb2ae996f, "11", 2)                  = 0xffffb2ae996f
strlen(" received!\n")                           = 11
memcpy(0xffffb2ae99b1, "Signal 11", 9)           = 0xffffb2ae99b1
memcpy(0xffffb2ae99ba, " received!\n", 11)       = 0xffffb2ae99ba
write(1, "Signal 11 received!\n", 20Signal 11 received!
)            = 20
_exit(139 &amp;lt;no return ...&amp;gt;
+++ exited (status 139) +++
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;exit status is preserved&lt;/strong&gt;, and it is clear that no heap allocation takes place inside the handler. This still does not make the handler &lt;em&gt;async-signal-safe&lt;/em&gt; in the absolute sense. It only demonstrates that, from the source code perspective, the handler avoids the most common contract violations &lt;em&gt;(most notably: dynamic allocation and unsafe I/O abstractions)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;One optional hardening step is to compile the handler in a separate translation unit with stricter flags (for example, disabling exceptions). This can reduce accidental violations, but it also increases build complexity and can make integration more fragile. In most cases, keeping the handler minimal and explicitly bounded is the more practical approach.&lt;/p&gt;
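&lt;p&gt;As a sketch of that hardening step (the flags are standard GCC/Clang options; the file names are hypothetical):&lt;/p&gt;

```shell
# hypothetical layout: the handler lives in its own translation unit,
# compiled with stricter flags (no exceptions, no RTTI), then linked in
g++ -c -O2 -fno-exceptions -fno-rtti handler.cpp -o handler.o
g++ -O2 main.cpp handler.o -o app
```

&lt;p&gt;The stricter flags apply only to the handler's translation unit, so the rest of the program keeps its normal exception machinery.&lt;/p&gt;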

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;PMR has a place far beyond performance tuning. In certain constrained environments, it can serve as a controlled allocation boundary rather than an optimization tool.&lt;/p&gt;

&lt;p&gt;Used carefully, it allows limited reintroduction of C++ abstractions into contexts where they would normally be forbidden, such as inside a signal handler. &lt;strong&gt;It does not make signal handling safe or eliminate UB.&lt;/strong&gt; What it does is remove one significant failure vector - uncontrolled heap interaction - and &lt;strong&gt;reduce boilerplate compared to manual C-style buffer management&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;As with anything related to POSIX signal safety, this approach must be applied conservatively and with a clear understanding of its limits. Within those limits, it can meaningfully improve clarity and robustness in otherwise fragile code paths.&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>linux</category>
      <category>programming</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>When Big O Lies</title>
      <dc:creator>Kyrylo Rud</dc:creator>
      <pubDate>Sun, 16 Nov 2025 15:38:57 +0000</pubDate>
      <link>https://forem.com/kyrylorud/when-big-o-lies-o7l</link>
      <guid>https://forem.com/kyrylorud/when-big-o-lies-o7l</guid>
      <description>&lt;p&gt;Since university days and throughout documentation digging, the &lt;strong&gt;Big O&lt;/strong&gt; has been the &lt;em&gt;de-facto&lt;/em&gt; standard for evaluating algorithms: from sorting and look-up in a collection to insertion operations or even simple palindrome checks &lt;em&gt;(especially in interview live-coding sessions)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;However, efficiency is not the same as "&lt;em&gt;best to use in &lt;strong&gt;this&lt;/strong&gt; case&lt;/em&gt;".&lt;br&gt;
What does Big O actually measure, how does it scale, and what can it lie about? Let’s dive deeper into the roots of Big O and into what can help developers choose algorithms smarter. Especially when it comes to implementation, maintainability and long-term support of algorithms written by hand rather than fetched from the standard library.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;This article focuses on practical understanding of algorithmic complexity for real-world engineering.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;The explanation of asymptotic notation is intentionally simplified and does not cover all formal mathematical details (Θ, Ω, ω, etc.).&lt;/em&gt;&lt;br&gt;
&lt;em&gt;All performance comparisons assume typical hardware behavior and ignore machine-specific optimizations unless stated.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;The small-N "impossible cases" are illustrative and not meant as practical recommendations.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  What does the Big O measure?
&lt;/h2&gt;

&lt;p&gt;Big O is, foremost, a mathematical notation, defined by German mathematicians &lt;a href="https://en.wikipedia.org/wiki/Paul_Gustav_Heinrich_Bachmann" rel="noopener noreferrer"&gt;Paul Bachmann&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Edmund_Landau" rel="noopener noreferrer"&gt;Edmund Landau&lt;/a&gt;, and others, commonly called the “Bachmann-Landau notation” or simply "Big O". It has existed for more than 100 years already.&lt;/p&gt;

&lt;p&gt;The "o" part comes from &lt;strong&gt;order of approximation&lt;/strong&gt;, basically "how fast something [algorithm complexity] grows".&lt;br&gt;
Grate, but where does the "big" come from? Actually, there are couple  'o' types.&lt;/p&gt;

&lt;p&gt;Analysis of an algorithm can be done in various ways, and the "O" notation helps to classify them into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Big O notation - the algorithm's cost grows &lt;strong&gt;at most&lt;/strong&gt; &lt;em&gt;this fast&lt;/em&gt;;&lt;/li&gt;
&lt;li&gt;Little-o notation - the algorithm's cost grows &lt;strong&gt;strictly slower&lt;/strong&gt; than &lt;em&gt;this&lt;/em&gt;;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Other notations such as Big Omega, Little Omega, etc. (not covered in this article).&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; It is incorrect to state that the worst case of an algorithm is the same as the little-o notation. Little-o has &lt;strong&gt;nothing to do&lt;/strong&gt; with worst-case vs. best-case.&lt;br&gt;
While Big O allows the runtime to reach the upper bound, Little-o requires the runtime to eventually &lt;em&gt;never&lt;/em&gt; reach the stated bound. These statements might look confusing right now, but they will become clear a bit later.&lt;br&gt;
So, there is no practical way to express one in terms of the other, since they measure different metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; However, to make the Big O easier to grasp, it can be simply stated: the function does not grow &lt;em&gt;faster than&lt;/em&gt; some constant multiplied by the bounding function (for O(n), a constant multiplied by n).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Basically, the impracticality of the little-o (the difficulty of proving that an algorithm would never reach a certain bound) led to the popularity, wide adoption, and everyday usage of the Big O in the programming world. &lt;em&gt;However, this order of things is mostly relevant to software engineering; in mathematics the situation is different.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It lets us understand &lt;strong&gt;the rate&lt;/strong&gt; at which an algorithm grows depending on the amount of input data. The key idea is: &lt;em&gt;it is not a guarantee of actual algorithm speed&lt;/em&gt;, but rather a &lt;strong&gt;reference to how complex it becomes&lt;/strong&gt; as the input size grows.&lt;/p&gt;

&lt;p&gt;This is where the idea of &lt;strong&gt;time-complexity&lt;/strong&gt; comes from: &lt;em&gt;the scaling behavior of runtime as &lt;strong&gt;N&lt;/strong&gt; increases&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Big O hides constants, machine effects, memory patterns, branch prediction, etc. Some algorithms may execute &lt;em&gt;(not work, but execute)&lt;/em&gt; faster despite the Big O analysis due to cache locality, compiler-level machine code optimizations, and similar factors.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Growth rate
&lt;/h2&gt;

&lt;p&gt;Any article that explains the Big O usually references the following picture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23eg2osdutzgoey9rrik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23eg2osdutzgoey9rrik.png" alt="Big O - all graphs" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This picture shows multiple function graphs of the Big O &lt;em&gt;growth rate&lt;/em&gt;. Any algorithm can be expressed through one of these common &lt;strong&gt;complexity classes&lt;/strong&gt; &lt;em&gt;(function graphs)&lt;/em&gt;. However, it is not limited to them, and a more precise growth rate can be expressed with a custom equation.&lt;/p&gt;

&lt;p&gt;The following functions are &lt;em&gt;de-facto&lt;/em&gt; standards for most common algorithms. Their origin comes from mathematics and from &lt;em&gt;"natural"&lt;/em&gt; algorithmic patterns &lt;em&gt;(natural meaning: what you would intuitively expect from the algorithm's structure)&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;O(n!) - exploring all permutations;&lt;/li&gt;
&lt;li&gt;O(2ⁿ) - exploring all subsets;&lt;/li&gt;
&lt;li&gt;O(n²) - nested loops or matrix operations;&lt;/li&gt;
&lt;li&gt;O(n log n) - divide-and-conquer patterns where data is split repeatedly and merged in linear time (e.g., merge sort, heapsort);&lt;/li&gt;
&lt;li&gt;O(n) - effectively one pass over each item of the data;&lt;/li&gt;
&lt;li&gt;O(√n) - geometric search spaces (e.g., 2-D grids);&lt;/li&gt;
&lt;li&gt;O(log n) - halving work, where each iteration removes half of the remaining data (e.g. binary search).&lt;/li&gt;
&lt;/ul&gt;
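&lt;p&gt;These &lt;em&gt;"natural"&lt;/em&gt; patterns can be verified by counting basic operations instead of timing anything. A minimal Python sketch (the helper functions are illustrative, not a benchmark):&lt;/p&gt;

```python
# count the basic operations performed by three common loop shapes
def count_linear(n):
    ops = 0
    for _ in range(n):          # one pass over the data: O(n)
        ops += 1
    return ops

def count_quadratic(n):
    ops = 0
    for _ in range(n):          # nested loops: O(n**2)
        for _ in range(n):
            ops += 1
    return ops

def count_halving(n):
    ops = 0
    while n:                    # each step discards half the data: O(log n)
        ops += 1
        n //= 2
    return ops

print(count_linear(16), count_quadratic(16), count_halving(16))
```

&lt;p&gt;For &lt;code&gt;n = 16&lt;/code&gt; this prints &lt;code&gt;16 256 5&lt;/code&gt; - exactly the linear, quadratic and logarithmic growth described above.&lt;/p&gt;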

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; While O(log₂ n), O(ln n) and O(log₁₀ n) may differ in actual runtime, they belong to the same complexity class — O(log n). The &lt;em&gt;distinction&lt;/em&gt; between them will be explained later.&lt;br&gt;
Meanwhile, &lt;em&gt;O(n log n)&lt;/em&gt; is a different class altogether, combining linear and logarithmic growth.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But what would be faster: jump search or binary search?&lt;br&gt;
The natural answer is: binary search, of course, since the growth rate of O(log n) is lower than that of O(√n).&lt;/p&gt;

&lt;p&gt;The problem is that, apart from the &lt;em&gt;"implementation quality"&lt;/em&gt; of the algorithm (how clean the code is, whether there are redundant checks, etc.), there is always the &lt;strong&gt;K-factor&lt;/strong&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  K-Factor and asymptotic analysis
&lt;/h2&gt;

&lt;p&gt;When the Big O is mentioned, it is never bound to the context of the input data (what is actually expected). Complexity of the algorithm is viewed from the infinite POV &lt;em&gt;(analyzed in the limit as the input size approaches infinity)&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Any algorithm with complexity class O(log n) would be faster than O(√n). &lt;em&gt;Right?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It may sound obvious that one function is expected to grow much faster than the other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4qu25gb4uoy5985m06r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4qu25gb4uoy5985m06r.png" alt="Graphs for O(log n) and O(√n)" width="700" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Analyzing an algorithm's complexity growth rate as the input size increases is asymptotic analysis. There are two practical perspectives for the Big O:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;asymptotic&lt;/strong&gt; &lt;em&gt;(infinite-N behavior)&lt;/em&gt; - basically what is represented in documentation for most algorithms &lt;em&gt;(e.g. quicksort is O(n log n))&lt;/em&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pre-asymptotic&lt;/strong&gt; &lt;em&gt;(finite-N behavior)&lt;/em&gt; - rarely documented (because &lt;em&gt;context&lt;/em&gt; matters: &lt;em&gt;which&lt;/em&gt; algorithms are being compared, and &lt;em&gt;which&lt;/em&gt; N-range is relevant), although it can be useful when comparing algorithms on specific datasets or limited input sizes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Asymptotically speaking &lt;em&gt;(analysis of a function in the limit as &lt;code&gt;N → ∞&lt;/code&gt;)&lt;/em&gt;, the growth of function A can be faster than the growth of function B - this is the case with all the &lt;em&gt;"standard"&lt;/em&gt; growth rates &lt;em&gt;(complexity classes)&lt;/em&gt; usually compared on graphs.&lt;/p&gt;

&lt;p&gt;But what happens when the algorithm has constants? Well, this is where the K-factor kicks in, as well as pre-asymptotics!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I’m using the old K-factor naming here - a little habit I carried from early C/C++ and embedded style guides, where K was the go-to marker (prefix) for constants like &lt;code&gt;kConstantValue&lt;/code&gt;.&lt;br&gt;
Also, using "C-factor" would sound confusingly close to the C programming language itself, so sticking with the traditional "K".&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's take a look at O(√n) and O(log n) again, the same picture as before &lt;em&gt;(apart from the negative time complexity for logarithms where &lt;code&gt;N &amp;lt; 1&lt;/code&gt; - this is impractical for algorithms, but a function graph is a function graph)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmuxgm6ai1p7fiv82ian.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmuxgm6ai1p7fiv82ian.png" alt="Graphs for O(log n) and O(√n) in scale" width="700" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can clearly see that O(log n) is unconditionally &lt;strong&gt;"faster"&lt;/strong&gt; (grows more slowly asymptotically), which is great. However, if there were some factor that affects it with a small constant, let’s say &lt;code&gt;K = 2&lt;/code&gt;, then we would see a very different picture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nuugmgpowpn1d3keojg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nuugmgpowpn1d3keojg.png" alt="Graphs for O(log n) and O(√n) cross points" width="700" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This &lt;strong&gt;drastically&lt;/strong&gt; changes the situation: these curves &lt;em&gt;never intersect&lt;/em&gt; without the constant factor. Now, on the scale where &lt;code&gt;N &amp;lt; 75&lt;/code&gt;, there are &lt;strong&gt;two&lt;/strong&gt; &lt;em&gt;cross-points&lt;/em&gt;, at approximately &lt;code&gt;N ≈ 2&lt;/code&gt; and &lt;code&gt;N ≈ 74&lt;/code&gt;.&lt;br&gt;
Therefore, in theory, on the range &lt;code&gt;2 &amp;lt; N &amp;lt; 74&lt;/code&gt;, the O(√n) algorithm would be more efficient than O(2 log n), regardless of the asymptotic analysis.&lt;br&gt;
&lt;strong&gt;This is pre-asymptotic behavior.&lt;/strong&gt;&lt;/p&gt;
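&lt;p&gt;The cross-points are easy to verify numerically. A small Python sketch (the cost models &lt;code&gt;sqrt(n)&lt;/code&gt; and &lt;code&gt;2·ln(n)&lt;/code&gt; are the illustrative functions from the graphs, not real algorithms):&lt;/p&gt;

```python
import math

def faster_of(n):
    # compare the two modelled costs; min() picks the cheaper one,
    # so no comparison of real algorithms is implied here
    sqrt_cost = math.sqrt(n)
    log_cost = 2.0 * math.log(n)     # natural logarithm, as on the graphs
    if min(sqrt_cost, log_cost) == sqrt_cost:
        return "sqrt"
    return "log"

for n in (2, 10, 50, 74, 100, 1000):
    print(n, faster_of(n))
```

&lt;p&gt;Inside the window the √n model is the cheaper one; outside it, the logarithmic model wins - matching the cross-points described above.&lt;/p&gt;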

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The constant is usually greater than 2, as it may depend on various extra comparisons or operations inside the algorithm that improve it asymptotically.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Asymptotically, &lt;strong&gt;O(2 log n) is equal to O(log n)&lt;/strong&gt; &lt;em&gt;(they have the same growth rate = belong to the same class; this is a one-way equation)&lt;/em&gt;, since constants are not relevant in Big O analysis.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Such one-way equations (the naming was suggested by Donald Knuth) are basically an abuse of notation, but widely used and contextually &amp;amp; semantically clear. The more correct notation would be &lt;em&gt;2 log n ∈ O(log n)&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From the Big O perspective: &lt;strong&gt;constants never move a function to a different complexity class - ever&lt;/strong&gt;. And the notation O(2 log n) &lt;em&gt;is incorrect&lt;/em&gt; (extra constants are redundant asymptotically), but it is perfectly understandable.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As mentioned earlier, O(log₂ n) and O(log₁₀ n) &lt;strong&gt;may differ in&lt;/strong&gt; actual &lt;strong&gt;runtime&lt;/strong&gt; but belong to the &lt;strong&gt;same complexity class&lt;/strong&gt;.&lt;br&gt;
This is exactly why, in asymptotic analysis, all logarithms are collapsed into the generic O(log n).&lt;br&gt;
In the infinite-N limit, their differences reduce to constant factors, and constants disappear. &lt;em&gt;This is why documentation almost always writes O(log n), even when the underlying behavior is specifically O(log₂ n), etc.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
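&lt;p&gt;The "constant factor between logarithm bases" is a one-line identity: &lt;code&gt;log2(n) = log2(10) * log10(n)&lt;/code&gt;. A quick Python check:&lt;/p&gt;

```python
import math

# the ratio log2(n) / log10(n) collapses to the same constant for every n
ratios = [round(math.log2(n) / math.log10(n), 6) for n in (2, 100, 10**6)]
print(ratios, round(math.log2(10), 6))
```

&lt;p&gt;Whatever the base, the ratio is the constant &lt;code&gt;log2(10)&lt;/code&gt;, which is exactly the kind of factor asymptotic analysis discards.&lt;/p&gt;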

&lt;p&gt;On the large scale it might not matter, but in applied programming it is actually important - especially in performance management.&lt;/p&gt;

&lt;p&gt;In practice, knowing how high the N-scale is expected on average (or consistently) can significantly save precious CPU cycles, even if this is counter-intuitive from the Big O perspective. Even a subtle complexity difference can play its role.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This particular example is for &lt;strong&gt;demonstration only&lt;/strong&gt;; in practice there are no pure O(√n) algorithms. &lt;em&gt;Some constant&lt;/em&gt; is always present, so it never behaves as a &lt;em&gt;"real"&lt;/em&gt; O(√n).&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Zooming too close
&lt;/h2&gt;

&lt;p&gt;Let's zoom in, maybe too much, into the region where N &amp;lt; 10.&lt;br&gt;
At this scale, the complexity curves behave very differently from what one might expect. Some functions dip below zero (for example, O(log n) becomes negative for &lt;code&gt;0 &amp;lt; n &amp;lt; 1&lt;/code&gt;), while others appear to cross each other or even briefly reverse their asymptotic order. And, of course, a real O(log n) algorithm never has zero cost even for small N, even though the pure log curve touches zero at &lt;code&gt;N = 1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbr913jtpr1qt4h04bq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbr913jtpr1qt4h04bq0.png" alt="Big O - All Graphs, zoom in" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For algorithms and Big O, &lt;strong&gt;N is always a positive integer&lt;/strong&gt;, and meaningful values begin at &lt;code&gt;N = 1&lt;/code&gt;, since it represents the &lt;em&gt;size of the input data&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;However, even though this small region is &lt;strong&gt;not practically relevant&lt;/strong&gt;, it shows a trend: the &lt;em&gt;shape of each curve&lt;/em&gt; hints at where crossover points may appear when constants (the K-factor) are introduced.&lt;/p&gt;

&lt;p&gt;If the growth rates differ only slightly (for example, O(√n) vs O(log n)), a sufficiently big constant may shift the curves so that one algorithm becomes &lt;em&gt;faster&lt;/em&gt; than the other for some realistic inputs, as we saw before. &lt;em&gt;And even more, as we are going to see.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;All pre-asymptotic behavior starts in this region. It is helpful to explore it in a graphing calculator to understand the trends better. The K-factor may be small, but experimenting with graphs makes it much easier to visualize and remember how constants shift the curves.&lt;/p&gt;
&lt;h2&gt;
  
  
  Practical example
&lt;/h2&gt;

&lt;p&gt;Let's have a look at sorting algorithms again, and at &lt;em&gt;probably&lt;/em&gt; the most iconic match-up among them: quick sort vs. insertion sort.&lt;/p&gt;

&lt;p&gt;Quick sort has a reputation as the most optimized and the fastest sorting algorithm for the common use-case (which does not exist, of course). The actual quirk, as we saw, is that being faster from the asymptotic POV does not mean it is faster on &lt;strong&gt;the pre-asymptotic scale&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let's implement quick and insertion sort in their simplest incarnation and see &lt;strong&gt;the pre-asymptotic in action&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Insertion Sort
&lt;/h3&gt;

&lt;p&gt;The loop-based version of insertion sort is used here, not the recursive one, since the main difference between them is the extra stack frames in the recursive version. The loop version is a better fit for C++ (which is not a pure functional language).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; C++ is not a pure functional language, so the compiler cannot guarantee automatic tail-call optimization. Therefore, the recursive variant creates stack frames, while the iterative variant avoids this overhead.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Also, the shift variation is used in this particular case rather than swap, to reduce the number of assignments.&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;algorithm&lt;/span&gt; &lt;span class="nf"&gt;insertion_sort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;is&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Insertion_sort" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; pseudo-code slightly formatted and stylized to align with the general style of the article.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A literal translation to C++ looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;insertion_sort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or slightly shorter and cleaner without changing the flow of the algorithm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;insertion_sort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
        &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="c1"&gt;// Note: shift right, not swap!&lt;/span&gt;
            &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Note: place saved value&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; using &lt;code&gt;std::swap&lt;/code&gt; adds &lt;strong&gt;2&lt;/strong&gt; extra assignments per inner-loop step compared to shifting, because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;swap = 3 writes per inner-loop step;&lt;/li&gt;
&lt;li&gt;shift = 1 write per inner-loop step.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Quick Sort
&lt;/h3&gt;

&lt;p&gt;To keep things fair with respect to insertion sort, the Hoare partition scheme variation of quick sort is used.&lt;br&gt;
It is more efficient, since it performs fewer swaps. It uses recursion, but it can still be used &lt;em&gt;"as-is"&lt;/em&gt; in the C++ implementation: the average call-stack depth is O(log n), with O(n) in the worst case, whereas a recursive insertion sort's call-stack depth is always O(n).&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;algorithm&lt;/span&gt; &lt;span class="nf"&gt;quick_sort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;is&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
    &lt;span class="nx"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;gt&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;partition&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;quicksort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lt&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;quicksort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;gt&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nx"&gt;algorithm&lt;/span&gt; &lt;span class="nf"&gt;partition&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;is&lt;/span&gt;
  &lt;span class="nx"&gt;pivot&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="nx"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;lt&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt;
  &lt;span class="nx"&gt;eq&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt;
  &lt;span class="nx"&gt;gt&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt;
  &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="nx"&gt;eq&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="nx"&gt;gt&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;pivot&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
      &lt;span class="nx"&gt;swap&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="kd"&gt;with&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;lt&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;lt&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="nx"&gt;eq&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;eq&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;pivot&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt;
      &lt;span class="nx"&gt;swap&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="kd"&gt;with&lt;/span&gt; &lt;span class="nx"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;gt&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;gt&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
      &lt;span class="nx"&gt;eq&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;eq&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;gt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Quicksort" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; pseudo-code slightly formatted and stylized to align with the general style of the article.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; the partition function used here is technically the &lt;em&gt;three-way partitioning&lt;/em&gt; approach (also known as the DNF partition). I originally referred to it as "Hoare partition", but strictly speaking Hoare’s classic scheme uses only two pointers and does not form an explicit middle region. The 3-way version is simply a more practical refinement often used in modern quicksorts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A direct C++ implementation is virtually identical to the pseudocode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="kr"&gt;inline&lt;/span&gt; &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="nf"&gt;partition_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;pivot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="n"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;lt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;eq&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;gt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;eq&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;pivot&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;swap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
      &lt;span class="n"&gt;lt&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;eq&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="nf"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;pivot&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;swap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
      &lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;eq&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;make_pair&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kr"&gt;inline&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;quick_sort_impl&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;partition_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;quick_sort_impl&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lt&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;quick_sort_impl&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;gt&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;quick_sort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;empty&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;quick_sort_impl&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;static_cast&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Benchmarking
&lt;/h3&gt;

&lt;p&gt;A quick &lt;em&gt;"toy"&lt;/em&gt; benchmark, sorting ~300 arrays with random unique values (insertion and quick sort receive the same input data), shows the following trend.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnf1d2bqfazg4iffqnvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnf1d2bqfazg4iffqnvw.png" alt="Quick sort vs. Insertion sort - reference lines overview" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Adding the reference graphs to the picture shows the actual difference between real-life execution and the estimated Big O values. &lt;strong&gt;The difference is caused by the K-factor.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The O(n²) and O(n log n) values shown are the &lt;em&gt;average-case&lt;/em&gt; complexities of insertion sort and quick sort, respectively.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With an appropriate K-factor, the reference curves can be aligned closely with the empirical measurements. The constant is computed as &lt;code&gt;K = med(Sₘ) / med(F[Sₘ])&lt;/code&gt;, where &lt;code&gt;F[Sₘ] = { F(n) | n ∈ Sₘ }&lt;/code&gt;. Here Sₘ denotes the set of the last m empirical measurements (typically about 0.5% of the dataset is sufficient), and F is the corresponding Big O complexity function evaluated over the same subset.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmti39x1yozbrc9jtkve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmti39x1yozbrc9jtkve.png" alt="Quick sort vs. Insertion sort - reference lines corrected with K-factor" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The general trend is aligned with asymptotic analysis: O(n log n) is faster than O(n²), as expected.&lt;/p&gt;

&lt;p&gt;However, zooming in to the scale of N &amp;lt;= 60 clearly shows that insertion sort is suddenly faster than quick sort: the trend has flipped.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf8umg4aa3evpzonxj13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf8umg4aa3evpzonxj13.png" alt="Quick sort vs. Insertion sort - reference lines corrected with K-factor, zoom in" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although the theoretical crossover point is expected around &lt;code&gt;N ≈ 40&lt;/code&gt;, the empirical data shows it closer to &lt;code&gt;N ≈ 45&lt;/code&gt;. This small shift is entirely normal and can stem from machine-level optimizations, cache-locality effects, or simply the fact that insertion sort avoids recursion.&lt;/p&gt;

&lt;p&gt;Ultimately, it doesn't change the bigger picture, but implementation details &lt;em&gt;always&lt;/em&gt; affect runtime performance.&lt;/p&gt;




&lt;p&gt;Standard libraries are constantly being optimized. This is why most of them &lt;strong&gt;&lt;em&gt;almost&lt;/em&gt; never&lt;/strong&gt; use pure quick sort.&lt;br&gt;
Below is a part of the quick sort implementation in &lt;code&gt;libc++&lt;/code&gt; from the &lt;strong&gt;LLVM&lt;/strong&gt; project v21.&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// The main sorting function.  Implements introsort combined with other ideas:&lt;/span&gt;
&lt;span class="c1"&gt;//  - option of using block quick sort for partitioning,&lt;/span&gt;
&lt;span class="c1"&gt;//  - guarded and unguarded insertion sort for small lengths,&lt;/span&gt;
&lt;span class="c1"&gt;//  - Tuckey's ninther technique for computing the pivot,&lt;/span&gt;
&lt;span class="c1"&gt;//  - check on whether partition was not required.&lt;/span&gt;
&lt;span class="c1"&gt;// The implementation is partly based on Orson Peters' pattern-defeating&lt;/span&gt;
&lt;span class="c1"&gt;// quicksort, published at: &amp;lt;https://github.com/orlp/pdqsort&amp;gt;.&lt;/span&gt;
&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;__introsort&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Upper bound for using insertion sort for sorting.&lt;/span&gt;
  &lt;span class="n"&gt;_LIBCPP_CONSTEXPR&lt;/span&gt; &lt;span class="n"&gt;difference_type&lt;/span&gt; &lt;span class="n"&gt;__limit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__len&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;__limit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;__insertion_sort&lt;/span&gt;&lt;span class="p"&gt;(...);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://github.com/llvm/llvm-project/blob/llvmorg-21.1.0/libcxx/include/__algorithm/sort.h#L725-L726" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Quick sort and insertion sort are, obviously, not the whole optimization story. However, the selection logic (insertion vs. quick sort) is clearly there: &lt;em&gt;use insertion sort for collections smaller than 24 elements&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; One of the reasons why, in a simple benchmark, insertion sort stays faster than quick sort up to such a large threshold (~45 vs. libc++'s 24) is that the example does not include challenges like cache locality, non-unique elements, or partially sorted data. Changing the dataset or the data distribution may significantly reduce the gap between insertion and quick sort.&lt;br&gt;
For a broader overview of some of these effects, check: Global Scientific Journal - &lt;a href="https://www.globalscientificjournal.com/researchpaper/Comparison_between_Quicksort_MergeSort_and_Insertion_Sort.pdf" rel="noopener noreferrer"&gt;"Comparison between Quicksort, MergeSort and Insertion Sort"&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;But why is insertion sort faster than quick sort at small N? This is not a purely asymptotic effect. There are other reasons, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no recursion (as mentioned in the algorithm variation description);&lt;/li&gt;
&lt;li&gt;contiguous memory;&lt;/li&gt;
&lt;li&gt;extremely cache-friendly behavior;&lt;/li&gt;
&lt;li&gt;stable branch prediction;&lt;/li&gt;
&lt;li&gt;fewer instructions;&lt;/li&gt;
&lt;li&gt;etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The list of reasons why algorithm A can be &lt;strong&gt;faster&lt;/strong&gt; than algorithm B is &lt;strong&gt;not limited&lt;/strong&gt; &lt;em&gt;to asymptotic/pre-asymptotic&lt;/em&gt; behavior, and is &lt;strong&gt;also influenced&lt;/strong&gt; by the actual &lt;em&gt;physical manifestation of the algorithm&lt;/em&gt; on specific hardware.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Big O can evaluate the complexity growth rate of any algorithm used in code. However, its asymptotic nature does not tell you which algorithm will be &lt;strong&gt;"faster"&lt;/strong&gt; at a specific scale.&lt;/p&gt;

&lt;p&gt;Standard library algorithms are usually very well optimized. But when an algorithm is custom-written for a specific application, not only pre-asymptotic behavior but also hardware optimization levels, CPU characteristics, and instruction patterns can significantly affect performance.&lt;/p&gt;

&lt;p&gt;The best approach is not only to understand the K-factor of the algorithm, but also to understand where this algorithm is applied. This includes, but is not limited to, knowing the median/average/edge-case ranges of the N-scale.&lt;/p&gt;

&lt;p&gt;But what is a &lt;em&gt;"&lt;strong&gt;big&lt;/strong&gt; enough"&lt;/em&gt; &lt;strong&gt;N&lt;/strong&gt;? When will the asymptotics show the difference predicted by the Big O estimation? This depends purely on the specifics of the algorithm, the data distribution, and many other factors. As with the "quick/insertion sort" case, the crossover point could be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;below &lt;code&gt;N ≈ 45&lt;/code&gt; in a simple benchmark;&lt;/li&gt;
&lt;li&gt;below &lt;code&gt;N ≈ 24&lt;/code&gt; in the standard C++ library implementation;&lt;/li&gt;
&lt;li&gt;above 10ᵐ, depending on the dataset, cache locality, machine code, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The actual &lt;strong&gt;crossover point may vary noticeably&lt;/strong&gt; - &lt;em&gt;sometimes by several orders of magnitude&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Remember, Big O is neither a performance factor nor actual runtime. It is only the rate of complexity growth, &lt;strong&gt;&lt;em&gt;always&lt;/em&gt; specified in the asymptotic sense&lt;/strong&gt; &lt;em&gt;(not from the pre-asymptotic POV)&lt;/em&gt;. Keeping that in mind during optimization requires some in-depth analysis, but it can yield relatively easy performance wins.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>performance</category>
      <category>programming</category>
    </item>
    <item>
      <title>Optional Is Not Optional in C++ (and Definitely Not a Pointer)</title>
      <dc:creator>Kyrylo Rud</dc:creator>
      <pubDate>Wed, 05 Nov 2025 17:59:25 +0000</pubDate>
      <link>https://forem.com/kyrylorud/optional-is-not-optional-in-c-and-definitely-not-a-pointer-34jp</link>
      <guid>https://forem.com/kyrylorud/optional-is-not-optional-in-c-and-definitely-not-a-pointer-34jp</guid>
      <description>&lt;p&gt;Last week I had a chat with a few fellow C++ devs, and I was genuinely surprised; some still avoid using &lt;code&gt;std::optional&lt;/code&gt; and &lt;code&gt;std::expected&lt;/code&gt;. Reason?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This adds extra indirection on access, as well as a memory allocation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Basically, there is a belief that they are implemented over pointers. It still pops up more often than you'd guess, but it's simply &lt;strong&gt;wrong&lt;/strong&gt;.&lt;br&gt;
Neither &lt;code&gt;std::optional&lt;/code&gt; nor &lt;code&gt;std::expected&lt;/code&gt; uses pointers or heap allocations to store data. Everything lives on the stack - lightweight, fast, and predictable.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Let's finally put that myth to rest.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;The &lt;code&gt;std::optional&amp;lt;T&amp;gt;&lt;/code&gt; is just a lightweight wrapper around a value of type &lt;code&gt;T&lt;/code&gt;, plus a status flag that tells whether it's initialized. No heap or pointer usage at all; conceptually, it looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;template&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;typename&lt;/span&gt; &lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="nc"&gt;optional&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;bool&lt;/span&gt; &lt;span class="n"&gt;is_initialized_&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;typename&lt;/span&gt; &lt;span class="n"&gt;aligned_storage&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;alignof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;::&lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt; &lt;span class="n"&gt;storage_&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When it is assigned or constructed, the object is created &lt;strong&gt;in place&lt;/strong&gt; inside the storage. If it's empty, the storage simply stays uninitialized.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Optional does not allocate memory. So it can do without allocators.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3672.html#open_questions.allocator" rel="noopener noreferrer"&gt;Standard says.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;




&lt;p&gt;The &lt;code&gt;std::expected&amp;lt;T, E&amp;gt;&lt;/code&gt; is very similar in spirit - a lightweight value wrapper, but with additional functionality. Instead of just saying &lt;em&gt;"has value or not"&lt;/em&gt;, it holds either a value of type &lt;code&gt;T&lt;/code&gt; &lt;strong&gt;or&lt;/strong&gt; an error of type &lt;code&gt;E&lt;/code&gt;.&lt;br&gt;
Again, no heap or pointer usage - just a union that lives on the stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;template&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;E&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;expected&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;bool&lt;/span&gt; &lt;span class="n"&gt;has_val&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;union&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;T&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;E&lt;/span&gt; &lt;span class="n"&gt;unex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you construct it, either &lt;code&gt;val&lt;/code&gt; or &lt;code&gt;unex&lt;/code&gt; is placed &lt;strong&gt;in place&lt;/strong&gt; inside the union. Switching between value and error only toggles which part of the union is active.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;std::expected&lt;/code&gt; also has a partial specialization for &lt;code&gt;void&lt;/code&gt; value types; however, this is not relevant in this context, and it is essentially quite close to the &lt;code&gt;std::optional&lt;/code&gt; implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;template&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;E&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;expected&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;bool&lt;/span&gt; &lt;span class="n"&gt;has_val&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;union&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;E&lt;/span&gt; &lt;span class="n"&gt;unex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Any object of &lt;code&gt;expected&amp;lt;T, E&amp;gt;&lt;/code&gt; either contains a value of type &lt;code&gt;T&lt;/code&gt; or a value of type &lt;code&gt;E&lt;/code&gt; within its own storage.  &lt;strong&gt;Implementations are not permitted to use additional storage, such as dynamic memory&lt;/strong&gt;, to allocate the object of type &lt;code&gt;T&lt;/code&gt; or the object of type &lt;code&gt;E&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p0323r11.html#expected.expected" rel="noopener noreferrer"&gt;Standard says.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;




&lt;p&gt;Let's summarize - &lt;code&gt;std::optional&lt;/code&gt; and &lt;code&gt;std::expected&lt;/code&gt; are &lt;strong&gt;stack-based, safe, and cheap&lt;/strong&gt; - no hidden allocations.&lt;br&gt;
They behave like any other &lt;strong&gt;value wrapper&lt;/strong&gt;, just adding a status flag (normally 1 byte, rounded up by padding).&lt;/p&gt;

&lt;p&gt;Use them instead of raw pointers or C-style sentinels.&lt;br&gt;
They're modern C++ tools - &lt;strong&gt;fast, expressive, and made for exactly this job.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Do &lt;strong&gt;NOT&lt;/strong&gt; reinvent them with pointers;&lt;br&gt;
Do &lt;strong&gt;NOT&lt;/strong&gt; avoid them thinking they are one;&lt;br&gt;
Do &lt;strong&gt;USE&lt;/strong&gt; them if you need this functionality.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;A historical note - or at least my best guess - on the origin of this everlasting myth, which still finds us several standards after the introduction of &lt;code&gt;std::optional&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I was curious what actually creates these misconceptions. The simple answer - &lt;em&gt;we used pointers and NULL as a stand-in for std::nullopt in C code&lt;/em&gt; - didn't cut it for me.&lt;/p&gt;

&lt;p&gt;During my research, I found several articles and comments that boil down to &lt;code&gt;pointers == optional&lt;/code&gt;, even in the context of C++. Here are a few examples:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What makes an optional an optional?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It either contains a value or nothing.&lt;/li&gt;
&lt;li&gt;We can query an optional whether it contains a value or not.&lt;/li&gt;
&lt;li&gt;We can retrieve the value if there is any.
Turns out: a ordinary pointer just like in C has exactly these properties
...
pointers and optionals are quite similar&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://optionalcpp.com/2022/07/17/pointer-syntax.html" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Also another one:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Rule of thumb (for me) is pointer types are nullable and therefore inherently 'optional'&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://lemire.me/blog/2023/01/12/care-is-needed-to-use-c-stdoptional-with-non-trivial-objects/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Quick note regarding these quotes:&lt;br&gt;
&lt;em&gt;I trimmed some context from these quotes but tried not to distort their meaning. My main concern here - comparing pointers and optionals is fundamentally incorrect. &lt;strong&gt;The fact that you can use them for similar tasks doesn't make a mallet out of a microscope.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;nullptr&lt;/code&gt; is just a convention meaning &lt;em&gt;"points to nothing"&lt;/em&gt;, but that &lt;strong&gt;"nothing"&lt;/strong&gt; is still something - which is why &lt;code&gt;std::optional&lt;/code&gt; is a more precise and safer way to express absence.&lt;/p&gt;

&lt;p&gt;Then I stumbled upon this C++ standards paper, and I think I finally saw where all the myths came from - beyond just "similar functionality."&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Julius Caesar:&lt;/strong&gt; "Et tu, Boost?"&lt;br&gt;
&lt;strong&gt;Boost:&lt;/strong&gt; "Not betrayal - legacy."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;44 BC (~ -6.3 × 10¹⁰ Unix time)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a comparison between &lt;code&gt;boost::optional&lt;/code&gt; and the &lt;code&gt;std::optional&lt;/code&gt; proposal, there's a line that says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"explicit convert to ptr - no [proposal fror std], yes [boost::optional]"&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3672.html#comparison_with_boost" rel="noopener noreferrer"&gt;Standard says.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;In &lt;code&gt;boost::optional&lt;/code&gt; there's a method documented as: &lt;em&gt;"&lt;strong&gt;Returns a pointer&lt;/strong&gt; to the value if this is initialized, otherwise, returns NULL."&lt;/em&gt;&lt;br&gt;
This method appears as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;get_pointer&lt;/code&gt; method in v1.30 (introduction of &lt;code&gt;boost::optional&lt;/code&gt;);&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;get_ptr&lt;/code&gt; method since v1.31 until nowadays.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Unfortunately&lt;/em&gt;, there's no paper or rationale explaining &lt;em&gt;why&lt;/em&gt; fetching the value by pointer was considered necessary.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;My guess is&lt;/em&gt; this ties directly to C-like design habits that still influence modern C++ developers through these myths. Boost's impact on C++ is hard to overstate - and those early interfaces continue to shape misunderstandings about &lt;code&gt;std::optional&lt;/code&gt; and &lt;code&gt;std::expected&lt;/code&gt;, often leading people to avoid them or, worse, recreate unsafe patterns from the past.&lt;/p&gt;

&lt;p&gt;To my knowledge, none of the popular C++ libraries has ever implemented an &lt;code&gt;optional&lt;/code&gt; using pointers internally. However, the introduction of the rather confusing &lt;code&gt;get_ptr&lt;/code&gt; by &lt;code&gt;boost::optional&lt;/code&gt; could mislead some developers - especially at a time when pointers were widely used to represent nullable values in C code-bases.&lt;/p&gt;

&lt;p&gt;The naming alone was more than enough to reinforce the illusion of pointer-based implementation and keep this myth alive until today.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;P.S.&lt;/strong&gt; In modern C++, both &lt;code&gt;std::optional&lt;/code&gt; and &lt;code&gt;std::expected&lt;/code&gt; provide the indirection (&lt;code&gt;operator*&lt;/code&gt;) and member access (&lt;code&gt;operator-&amp;gt;&lt;/code&gt;) operators to access their contained values. These operators express generalized value access rather than raw pointer semantics, making them a clearer and more natural choice than the old &lt;code&gt;boost::optional::get_ptr()&lt;/code&gt; method.&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>moderncpp</category>
      <category>programming</category>
      <category>coding</category>
    </item>
  </channel>
</rss>
