<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: muhammed shahid</title>
    <description>The latest articles on Forem by muhammed shahid (@muhammed_shahid_d7f50e64c).</description>
    <link>https://forem.com/muhammed_shahid_d7f50e64c</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3902310%2Fcd0cbaa6-8412-4bd2-9c69-638f4c5fa903.jpg</url>
      <title>Forem: muhammed shahid</title>
      <link>https://forem.com/muhammed_shahid_d7f50e64c</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/muhammed_shahid_d7f50e64c"/>
    <language>en</language>
    <item>
      <title>How to Compute CLAHE Parameters Dynamically for Every Image</title>
      <dc:creator>muhammed shahid</dc:creator>
      <pubDate>Thu, 30 Apr 2026 16:07:57 +0000</pubDate>
      <link>https://forem.com/muhammed_shahid_d7f50e64c/how-to-computes-clahe-parameters-dynamically-for-every-image-peo</link>
      <guid>https://forem.com/muhammed_shahid_d7f50e64c/how-to-computes-clahe-parameters-dynamically-for-every-image-peo</guid>
      <description>&lt;p&gt;&lt;em&gt;No sliders. No presets. Just math that listens to your image.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you've ever used CLAHE (Contrast Limited Adaptive Histogram Equalization) in OpenCV, you've probably written something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;clahe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createCLAHE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;clipLimit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tileGridSize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then spent the next hour tweaking those two magic numbers until the result looked "good enough."&lt;/p&gt;

&lt;p&gt;PACE doesn't do that.&lt;/p&gt;

&lt;p&gt;PACE is a perceptual image enhancement pipeline I've been building that analyses each image's statistical fingerprint and derives its own &lt;code&gt;clipLimit&lt;/code&gt; and &lt;code&gt;tileSize&lt;/code&gt; — before a single pixel of CLAHE is applied. This post is a deep dive into exactly how that works.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Two Parameters That Define CLAHE
&lt;/h2&gt;

&lt;p&gt;Before getting into the adaptive logic, let's establish what these parameters actually control:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;clipLimit&lt;/code&gt;&lt;/strong&gt; — The clipping threshold applied to each tile's histogram before redistribution. Higher values allow more aggressive contrast stretching. Lower values keep the enhancement conservative. Set it too high and you get halos and noise amplification. Too low and you might as well not bother.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;tileSize&lt;/code&gt;&lt;/strong&gt; — The spatial granularity of the local enhancement. Smaller tiles capture fine local structure but risk over-enhancing noise. Larger tiles are smoother but might miss subtle detail.&lt;/p&gt;

&lt;p&gt;The challenge: the "right" value for both of these is &lt;em&gt;image-dependent&lt;/em&gt;. A low-key portrait shot needs very different treatment than a high-contrast landscape or a noisy night photograph.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Reading the Image
&lt;/h2&gt;

&lt;p&gt;PACE starts by extracting a statistical fingerprint from the luminance channel (after converting to OKLab). This happens in &lt;code&gt;extractGlobalFeatures.js&lt;/code&gt; and produces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;distribution&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           &lt;span class="c1"&gt;// average luminance&lt;/span&gt;
    &lt;span class="nx"&gt;variance&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;// spread of tones&lt;/span&gt;
    &lt;span class="nx"&gt;entropy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;        &lt;span class="c1"&gt;// information density (normalized to [0,1])&lt;/span&gt;
    &lt;span class="nx"&gt;dynamicRange&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;// p95 - p5 percentile range&lt;/span&gt;
    &lt;span class="nx"&gt;shadowRatio&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;// fraction of pixels below 0.2&lt;/span&gt;
    &lt;span class="nx"&gt;highlightRatio&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// fraction of pixels above 0.8&lt;/span&gt;
    &lt;span class="nx"&gt;skewness&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;kurtosis&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="nx"&gt;structure&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;edgeDensity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;// noise-adjusted, soft-normalised edge signal&lt;/span&gt;
    &lt;span class="nx"&gt;textureIndex&lt;/span&gt;    &lt;span class="c1"&gt;// gradient variance / mean gradient&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="nx"&gt;noise&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;noiseRatio&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;     &lt;span class="c1"&gt;// local deviation vs global variance&lt;/span&gt;
    &lt;span class="nx"&gt;microContrast&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't just metadata. These values are the inputs to a small analytical engine that computes the CLAHE parameters.&lt;/p&gt;
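
&lt;p&gt;To make those definitions concrete, here's a simplified, self-contained sketch of how the &lt;code&gt;distribution&lt;/code&gt; block can be computed from a flat array of luminance values in [0, 1]. This is illustrative only: the 256-bin histogram is an assumption, and skewness, kurtosis, and the structure/noise blocks are omitted; the production &lt;code&gt;extractGlobalFeatures.js&lt;/code&gt; differs in detail:&lt;/p&gt;

```javascript
// Illustrative sketch of the distribution features; not PACE's actual code.
// Assumes `lum` is a flat array of luminance values in [0, 1].
function distributionFeatures(lum) {
  const n = lum.length;
  const bins = 256;
  const hist = new Array(bins).fill(0);
  let sum = 0;
  let shadows = 0;    // counts toward shadowRatio (pixels below 0.2)
  let highlights = 0; // counts toward highlightRatio (pixels above 0.8)
  lum.forEach(function (v) {
    sum += v;
    if (0.2 > v) shadows += 1;
    if (v > 0.8) highlights += 1;
    hist[Math.min(bins - 1, Math.floor(v * bins))] += 1;
  });
  const mean = sum / n;

  let varSum = 0;
  lum.forEach(function (v) { varSum += (v - mean) * (v - mean); });
  const variance = varSum / n;

  // Shannon entropy of the histogram, normalised to [0, 1] by log2(bins).
  let entropy = 0;
  hist.forEach(function (c) {
    if (c > 0) {
      const p = c / n;
      entropy -= p * Math.log2(p);
    }
  });
  entropy /= Math.log2(bins);

  // dynamicRange = p95 - p5, read off the cumulative histogram.
  let cum = 0;
  let p5 = 0;
  let p95 = bins - 1;
  let seenP5 = false;
  let seenP95 = false;
  hist.forEach(function (c, i) {
    cum += c;
    if (!seenP5) { if (cum >= 0.05 * n) { p5 = i; seenP5 = true; } }
    if (!seenP95) { if (cum >= 0.95 * n) { p95 = i; seenP95 = true; } }
  });
  const dynamicRange = (p95 - p5) / (bins - 1);

  return {
    mean: mean,
    variance: variance,
    entropy: entropy,
    dynamicRange: dynamicRange,
    shadowRatio: shadows / n,
    highlightRatio: highlights / n
  };
}
```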

&lt;h2&gt;
  
  
  Step 2: Computing &lt;code&gt;clipLimit&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Inside &lt;code&gt;computeControlParams()&lt;/code&gt;, the clip limit is derived from a single intermediate called &lt;code&gt;structureConfidence&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;structureConfidence&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;edgeDensity&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a signal-to-noise ratio for spatial structure. It asks: &lt;em&gt;how much of the gradient energy in this image is real structure, as opposed to noise?&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A sharp, textured image (high &lt;code&gt;edgeDensity&lt;/code&gt;, low &lt;code&gt;noiseRatio&lt;/code&gt;) → high &lt;code&gt;structureConfidence&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A soft or noisy image → low &lt;code&gt;structureConfidence&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
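
&lt;p&gt;Plugging hypothetical numbers into that ratio for the two cases (the feature values here are illustrative, not measured):&lt;/p&gt;

```javascript
// structureConfidence for the two cases above, with illustrative inputs.
function structureConfidence(edgeDensity, noiseRatio) {
  return edgeDensity / (1 + noiseRatio);
}

console.log(structureConfidence(0.8, 0.05)); // sharp, clean -> ~0.76
console.log(structureConfidence(0.2, 0.6));  // soft, noisy  -> 0.125
```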

&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;clipLimit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.02&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.08&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;structureConfidence&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This maps &lt;code&gt;structureConfidence ∈ [0, 1]&lt;/code&gt; to &lt;code&gt;clipLimit ∈ [0.02, 0.10]&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this makes sense:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When structure confidence is high, the image genuinely has strong local contrast variation that CLAHE should bring out. A higher clip limit allows the histogram redistribution to be more aggressive — safe to do because the edges are real.&lt;/p&gt;

&lt;p&gt;When the image is noisy or flat, a conservative clip limit prevents CLAHE from amplifying noise into artificial "texture." The floor of &lt;code&gt;0.02&lt;/code&gt; ensures some enhancement always happens.&lt;/p&gt;
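
&lt;p&gt;The endpoints of the map are easy to check directly:&lt;/p&gt;

```javascript
// The linear map from structureConfidence to clipLimit.
function clipLimitFor(structureConfidence) {
  return 0.02 + 0.08 * structureConfidence;
}

console.log(clipLimitFor(0));   // noisy/flat floor     -> 0.02
console.log(clipLimitFor(0.5)); // middle of the range  -> ~0.06
console.log(clipLimitFor(1));   // clean, edge-rich cap -> ~0.10
```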

&lt;h2&gt;
  
  
  Step 3: Computing &lt;code&gt;tileSize&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The tile size logic introduces one more intermediate — &lt;code&gt;granularity&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;granularity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;structureConfidence&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This refines the structure signal by penalising noise more directly. Even if &lt;code&gt;edgeDensity&lt;/code&gt; is high, if a significant chunk of it is noise-driven, we want larger tiles.&lt;/p&gt;

&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;tileSize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;granularity&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;tileSize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tileSize&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="nx"&gt;tileSize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tileSize&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Reading this:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;code&gt;granularity&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;Raw &lt;code&gt;tileSize&lt;/code&gt;
&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1.0 (sharp, clean)&lt;/td&gt;
&lt;td&gt;16px&lt;/td&gt;
&lt;td&gt;Fine-grained — trust local structure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0.5 (balanced)&lt;/td&gt;
&lt;td&gt;24px&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0.0 (flat/noisy)&lt;/td&gt;
&lt;td&gt;32px&lt;/td&gt;
&lt;td&gt;Coarser — smooth out noise&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-0.5 (very noisy)&lt;/td&gt;
&lt;td&gt;40px&lt;/td&gt;
&lt;td&gt;Conservative&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The clamp to &lt;code&gt;[8, 64]&lt;/code&gt; prevents pathological cases. The &lt;code&gt;Math.round(.../ 8) * 8&lt;/code&gt; snaps to a multiple of 8, a practical choice since multiples of 8 divide typical image dimensions evenly and keep the tile grid regular.&lt;/p&gt;
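
&lt;p&gt;The table rows can be verified by running the clamp-and-snap logic end to end:&lt;/p&gt;

```javascript
// Final tile size for each granularity row in the table above.
function tileSizeFor(granularity) {
  let tileSize = 32 - 16 * granularity;
  tileSize = Math.max(8, Math.min(64, tileSize));
  return Math.round(tileSize / 8) * 8;
}

console.log(tileSizeFor(1.0));  // sharp, clean -> 16
console.log(tileSizeFor(0.5));  // balanced     -> 24
console.log(tileSizeFor(0.0));  // flat/noisy   -> 32
console.log(tileSizeFor(-0.5)); // very noisy   -> 40
```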

&lt;h2&gt;
  
  
  The Full &lt;code&gt;computeControlParams&lt;/code&gt; Function
&lt;/h2&gt;

&lt;p&gt;Here's the complete logic, stripped down:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;computeControlParams&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;features&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;entropy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;dynamicRange&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;shadowRatio&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;highlightRatio&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;features&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;distribution&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;edgeDensity&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;features&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;structure&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;features&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;noise&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// How much contrast improvement does this image need?&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;contrastNeed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;entropy&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;dynamicRange&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// How much can we trust the edge signal?&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;structureConfidence&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;edgeDensity&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Tonal imbalance (is one end of the histogram overloaded?)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;imbalance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;abs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;shadowRatio&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;highlightRatio&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// --- Global Alpha (blending weight) ---&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;alphaRaw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
    &lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;imbalance&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="mf"&gt;0.3&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;contrastNeed&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="mf"&gt;0.4&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;structureConfidence&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;globalAlpha&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;alphaRaw&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;alphaRaw&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// --- Tile Size ---&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;granularity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;structureConfidence&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;tileSize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;granularity&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;tileSize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tileSize&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="nx"&gt;tileSize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tileSize&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// --- Clip Limit ---&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;clipLimit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.02&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.08&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;structureConfidence&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;globalAlpha&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tileSize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;clipLimit&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that &lt;code&gt;globalAlpha&lt;/code&gt; — which controls how strongly the CLAHE result is blended into the final output — is computed from the &lt;em&gt;same&lt;/em&gt; intermediates. The whole parameter set is internally consistent. An image that earns a high &lt;code&gt;structureConfidence&lt;/code&gt; gets a higher clip limit &lt;em&gt;and&lt;/em&gt; a finer tile size &lt;em&gt;and&lt;/em&gt; a stronger blend weight. The enhancement scales coherently.&lt;/p&gt;
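
&lt;p&gt;You can watch that coherence by running the logic condensed from &lt;code&gt;computeControlParams&lt;/code&gt; above on two feature sets. The feature numbers here are made-up illustrative values, not measurements:&lt;/p&gt;

```javascript
// Condensed from computeControlParams above, to show all three outputs
// moving together as structureConfidence changes.
function computeControlParamsLite(f) {
  const contrastNeed = (1 - f.entropy) * (1 - f.dynamicRange);
  const structureConfidence = f.edgeDensity / (1 + f.noiseRatio);
  const imbalance = Math.abs(f.shadowRatio - f.highlightRatio);

  const alphaRaw = 0.5 * imbalance + 0.3 * contrastNeed + 0.4 * structureConfidence;
  const globalAlpha = alphaRaw / (alphaRaw + 0.5);

  const granularity = structureConfidence - 0.5 * f.noiseRatio;
  let tileSize = 32 - 16 * granularity;
  tileSize = Math.max(8, Math.min(64, tileSize));
  tileSize = Math.round(tileSize / 8) * 8;

  const clipLimit = 0.02 + 0.08 * structureConfidence;
  return { globalAlpha: globalAlpha, tileSize: tileSize, clipLimit: clipLimit };
}

// Illustrative feature sets: a soft low-contrast scene vs. a clean, edge-rich one.
const foggy = computeControlParamsLite({
  entropy: 0.85, dynamicRange: 0.3, shadowRatio: 0.1, highlightRatio: 0.1,
  edgeDensity: 0.25, noiseRatio: 0.1
});
const sharp = computeControlParamsLite({
  entropy: 0.6, dynamicRange: 0.7, shadowRatio: 0.1, highlightRatio: 0.1,
  edgeDensity: 0.72, noiseRatio: 0.05
});
console.log(foggy); // conservative clipLimit, coarse tileSize, gentle blend
console.log(sharp); // higher clipLimit, finer tileSize, stronger globalAlpha
```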

&lt;h2&gt;
  
  
  Step 4: The CLAHE Implementation
&lt;/h2&gt;

&lt;p&gt;PACE implements CLAHE from scratch in &lt;code&gt;CLAHE.js&lt;/code&gt;. The core is a two-phase process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 1 — Build tile LUTs:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;buildTileLUTs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;gray&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;h&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tileSize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;clipLimit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// For each tile:&lt;/span&gt;
  &lt;span class="c1"&gt;// 1. Build a 256-bin histogram&lt;/span&gt;
  &lt;span class="c1"&gt;// 2. Clip the histogram at: Math.floor(clipLimit * tileArea)&lt;/span&gt;
  &lt;span class="c1"&gt;// 3. Redistribute the clipped excess uniformly across all bins&lt;/span&gt;
  &lt;span class="c1"&gt;// 4. Compute CDF → LUT mapping [0..255] → [0..255]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The clip threshold is computed as &lt;code&gt;clipLimit * tileArea&lt;/code&gt; — so the same &lt;code&gt;clipLimit&lt;/code&gt; value naturally scales with tile size. A 16×16 tile has 256 pixels; a 32×32 tile has 1024. The clip is proportional, not absolute.&lt;/p&gt;
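
&lt;p&gt;Filling in those four comments for a single tile, a minimal illustrative version looks like this (the real &lt;code&gt;CLAHE.js&lt;/code&gt; handles the whole tile grid; this sketch only builds one LUT):&lt;/p&gt;

```javascript
// Illustrative single-tile LUT builder following the four steps above.
// `tile` is an array of 8-bit gray values for one tile.
function buildLUT(tile, clipLimit) {
  const bins = 256;

  // 1. Build a 256-bin histogram.
  const hist = new Array(bins).fill(0);
  tile.forEach(function (g) { hist[g] += 1; });

  // 2. Clip at Math.floor(clipLimit * tileArea), collecting the excess.
  const threshold = Math.floor(clipLimit * tile.length);
  let excess = 0;
  hist.forEach(function (c, i) {
    if (c > threshold) {
      excess += c - threshold;
      hist[i] = threshold;
    }
  });

  // 3. Redistribute the clipped excess uniformly across all bins.
  const perBin = excess / bins;
  hist.forEach(function (c, i) { hist[i] = c + perBin; });

  // 4. CDF to LUT: map [0..255] to [0..255].
  const lut = new Array(bins);
  let cum = 0;
  hist.forEach(function (c, i) {
    cum += c;
    lut[i] = Math.round(255 * cum / tile.length);
  });
  return lut;
}
```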

&lt;p&gt;&lt;strong&gt;Phase 2 — Bilinear interpolation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Rather than applying one tile's LUT per pixel (which creates visible block boundaries), PACE maps each pixel into the tile grid and bilinearly interpolates between the four surrounding tile LUTs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;v_tl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;luts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ty&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;g&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;v_tr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;luts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ty&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;tx1&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;g&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;v_bl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;luts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ty1&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;g&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;v_br&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;luts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ty1&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;tx1&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;g&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="nx"&gt;out&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;w&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
  &lt;span class="nx"&gt;v_tl&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;fx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;fy&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
  &lt;span class="nx"&gt;v_tr&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;fx&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;fy&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
  &lt;span class="nx"&gt;v_bl&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;fx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;fy&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
  &lt;span class="nx"&gt;v_br&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;fx&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;fy&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the standard CLAHE bilinear interpolation scheme. The output is smooth regardless of tile boundaries.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Worked Example
&lt;/h2&gt;

&lt;p&gt;Say you feed PACE a &lt;strong&gt;foggy landscape&lt;/strong&gt; — low contrast, flat histogram, minimal noise:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;entropy&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~0.85 (information-rich despite low contrast)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;dynamicRange&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~0.3 (compressed tones)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;edgeDensity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~0.25 (soft, diffuse edges)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;noiseRatio&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~0.1 (relatively clean)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Computing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;structureConfidence = 0.25 / (1 + 0.1) ≈ 0.23&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;clipLimit = 0.02 + 0.08 × 0.23 ≈ 0.038&lt;/code&gt; → conservative stretching&lt;/li&gt;
&lt;li&gt;&lt;code&gt;granularity = 0.23 - 0.5 × 0.1 = 0.18&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tileSize = 32 - 16 × 0.18 ≈ 29 → rounded to 32&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now feed it a &lt;strong&gt;sharp architectural photo&lt;/strong&gt; — strong edges, clean capture:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;edgeDensity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~0.72&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;noiseRatio&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~0.05&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Computing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;structureConfidence = 0.72 / 1.05 ≈ 0.69&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;clipLimit = 0.02 + 0.08 × 0.69 ≈ 0.075&lt;/code&gt; → more aggressive&lt;/li&gt;
&lt;li&gt;&lt;code&gt;granularity = 0.69 - 0.025 = 0.665&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tileSize = 32 - 16 × 0.665 ≈ 21 → rounded to 24&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The architecture shot gets a higher clip limit (more contrast punch) and a smaller tile size (finer local detail). The fog shot gets conservative treatment with larger tiles that smooth over the low-signal regions. No manual tuning. Just the image telling PACE what it needs.&lt;/p&gt;

&lt;p&gt;Here are the visual and quantitative comparisons between the original image (input) and the CLAHE-enhanced image (output).&lt;/p&gt;

&lt;h4&gt;
  
  
  Visual comparison
&lt;/h4&gt;

&lt;p&gt;Fig. 1&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6rc0wuhptp8nufqlinh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6rc0wuhptp8nufqlinh.png" alt="cheeta" width="800" height="271"&gt;&lt;/a&gt;&lt;br&gt;
Fig. 2&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrr44o8ezqria0bwbz8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrr44o8ezqria0bwbz8m.png" alt="chest x-ray" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Quantitative comparison
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyx4vbndmsdqhcpvy8r8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyx4vbndmsdqhcpvy8r8h.png" alt="Histogram analysis" width="646" height="535"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Not Just Use Global Histogram Equalization?
&lt;/h2&gt;

&lt;p&gt;Because global equalization is blind to spatial structure. It will happily blow out a bright sky to recover shadow detail in a corner, creating unnatural tonal shifts. CLAHE's tile-based approach means the histogram is equalized &lt;em&gt;locally&lt;/em&gt;, so regions with different tonal characteristics are treated independently.&lt;/p&gt;

&lt;p&gt;But static CLAHE parameters mean you're still guessing — just at a coarser level. PACE's adaptive approach closes the loop: the image's own statistics define the operating point for CLAHE, which then feeds into a broader perceptual blending pipeline (Retinex detail, Laplacian texture, edge gain, halo suppression) that further sculpts the final output.&lt;/p&gt;
&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;The current &lt;code&gt;structureConfidence&lt;/code&gt; formulation is relatively simple — a ratio of edge density to noise. There's room to refine this further:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using the &lt;code&gt;textureIndex&lt;/code&gt; (gradient variance / mean gradient) to distinguish between fine texture and strong structural edges&lt;/li&gt;
&lt;li&gt;Feeding &lt;code&gt;kurtosis&lt;/code&gt; into the clip limit to handle bimodal histograms (silhouette shots, backlit subjects)&lt;/li&gt;
&lt;li&gt;A per-channel confidence estimate rather than a single luminance-based signal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;structureConfidence&lt;/code&gt; file already hints at a more sophisticated version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;computeStructureConfidence&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;edgeDensity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;edgeSignal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;3.0&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;edgeDensity&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;noiseSuppression&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;2.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;baseline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.55&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;baseline&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;baseline&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;edgeSignal&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;noiseSuppression&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This exponential formulation gives a softer, more robust curve than the current linear ratio — less sensitive to extreme values of &lt;code&gt;edgeDensity&lt;/code&gt;. It's on the roadmap.&lt;/p&gt;
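To make the sensitivity difference concrete, here is the linear ratio next to the exponential candidate (inputs are illustrative, not from any test suite):

```javascript
// Current linear form (edgeDensity / (1 + noiseRatio)) next to the
// saturating exponential candidate.
function linearConfidence(edgeDensity, noiseRatio) {
  return edgeDensity / (1 + noiseRatio);
}

function expConfidence(edgeDensity, noiseRatio) {
  const edgeSignal = 1 - Math.exp(-3.0 * edgeDensity);
  const noiseSuppression = 1 / (1 + 2.5 * noiseRatio);
  const baseline = 0.55;
  return baseline + (1 - baseline) * edgeSignal * noiseSuppression;
}

// Doubling an already-high edge density doubles the linear score,
// while the exponential form barely moves.
console.log(linearConfidence(0.5, 0.1), linearConfidence(1.0, 0.1));
console.log(expConfidence(0.5, 0.1), expConfidence(1.0, 0.1));
```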

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;The key insight driving all of this is that CLAHE parameters aren't free variables — they're &lt;em&gt;functions&lt;/em&gt; of image structure. Edge density, noise ratio, and tonal distribution each contain actionable information about how aggressively the enhancement should operate and at what spatial scale.&lt;/p&gt;

&lt;p&gt;PACE makes this explicit: extract the features, compute the parameters analytically, apply consistently. The result is an enhancement pipeline that adapts to each image without needing a human in the loop to dial in settings.&lt;/p&gt;

&lt;p&gt;If you're building image processing tools and you're still hardcoding clip limits, try deriving them. Your images will tell you what they need.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PACE is an open research project. The full source including the adaptive parameter controller, CLAHE implementation, and perceptual blending pipeline is on &lt;a href="https://github.com/muhammedshahid/pace/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>imageprocessing</category>
      <category>computervision</category>
      <category>algorithms</category>
    </item>
    <item>
      <title>I Built a Zero-Parameter Image Enhancement Pipeline — Here's How It Works</title>
      <dc:creator>muhammed shahid</dc:creator>
      <pubDate>Tue, 28 Apr 2026 16:52:40 +0000</pubDate>
      <link>https://forem.com/muhammed_shahid_d7f50e64c/i-built-a-zero-parameter-image-enhancement-pipeline-heres-how-it-works-38dp</link>
      <guid>https://forem.com/muhammed_shahid_d7f50e64c/i-built-a-zero-parameter-image-enhancement-pipeline-heres-how-it-works-38dp</guid>
      <description>&lt;p&gt;Most image enhancement pipelines have a dirty secret: they need you to tune them.&lt;br&gt;
Clip limit too high and CLAHE halos. Strength too aggressive and skin looks plastic.&lt;br&gt;
Tile size wrong for your image and you get block artifacts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PACE (Perceptual Adaptive Contrast Enhancement)&lt;/strong&gt; is my attempt to fix that.&lt;br&gt;
It analyzes the image, derives every enhancement parameter from its own statistics, and enhances it — without a single slider you need to touch.&lt;/p&gt;

&lt;p&gt;This article is for people who care about &lt;em&gt;why&lt;/em&gt; the math works, not just that it does.&lt;br&gt;
If you're a web dev or just want to see what it can do, I'll be writing follow-up articles for you — but start here if you want the full picture.&lt;/p&gt;
&lt;h3&gt;
  
  
  🚀 Try Live Demo
&lt;/h3&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://muhammedshahid.github.io/pace/src/" rel="noopener noreferrer"&gt;Open Demo&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The core problem with "just use CLAHE"
&lt;/h2&gt;

&lt;p&gt;CLAHE is everywhere, and for good reason — it's fast, effective, and well-understood.&lt;br&gt;
But it has two parameters that matter enormously: &lt;code&gt;tileSize&lt;/code&gt; and &lt;code&gt;clipLimit&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;clipLimit&lt;/code&gt; of 2.0 is a rule of thumb. It has nothing to do with your image.&lt;br&gt;
A &lt;code&gt;tileSize&lt;/code&gt; of 8 might be perfect for a high-frequency texture scene and completely wrong for a portrait. And that's before you layer in any post-processing — sharpening, tone mapping, detail recovery.&lt;/p&gt;

&lt;p&gt;The standard answer is "tune it per image." That's fine for a research dataset you've seen before. It breaks down the moment you're processing arbitrary inputs.&lt;/p&gt;

&lt;p&gt;PACE asks a different question: &lt;strong&gt;what does the image itself say it needs?&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The pipeline at a glance
&lt;/h2&gt;

&lt;p&gt;PACE runs seven stages in sequence:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌──────────────────────────────────────────────┐
│              Input: RGB Image                │
└──────────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────┐
│ 1. Color Space Transformation                │
│    (RGB → OKLab)                             │
└──────────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────┐
│ 2. Global Perceptual Analysis                │
│    (Distribution, Structure, Noise)          │
└──────────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────┐
│ 3. Adaptive Parameter Estimation             │
│    (α, λ, β, τ, tileSize, clipLimit)         │
└──────────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────┐
│ 4. Local Contrast Enhancement CLAHE          │
│    (adaptive tileSize, clipLimit)            │
└──────────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────┐
│ 5. Control Map Synthesis                     │
│    (Edge, Structure, Skin, Alpha Maps)       │
└──────────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────┐
│ 6. Perceptual Fusion                         │
│    (CLAHE + Retinex + Laplacian (nonlinear)) │
└──────────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────┐
│ 7. Inverse Transformation                    │
│    (OKLab → RGB)                             │
└──────────────────────────────────────────────┘
                     ↓
┌──────────────────────────────────────────────┐
│          Output: Enhanced Image              │
└──────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every parameter used in stages 4–6 is computed in stages 2–3.&lt;br&gt;
There is no configuration file.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why OKLab instead of sRGB or Lab?
&lt;/h2&gt;

&lt;p&gt;The pipeline works entirely on the luminance (L) channel of OKLab, leaving the chroma planes (a, b) untouched.&lt;/p&gt;

&lt;p&gt;OKLab is perceptually uniform in a way that CIE Lab is not in practice — a Euclidean distance of 0.05 in OKLab corresponds to roughly the same perceived difference regardless of hue. This matters because the blending math in stage 6 adds deltas to L directly. In sRGB those deltas would cause perceptually inconsistent results across the tonal range: aggressive in shadows, gentle in highlights. In OKLab, the math has consistent perceptual meaning everywhere.&lt;/p&gt;

&lt;p&gt;The conversion is LUT-accelerated: a 256-entry table for sRGB→linear, and a 4096-entry table for the cube root (&lt;code&gt;cbrt&lt;/code&gt;) needed in the OKLab transform. At the inner loop scale of a 12-megapixel image, those two lookups save meaningful time over calling &lt;code&gt;Math.cbrt()&lt;/code&gt; on every pixel.&lt;/p&gt;
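A minimal sketch of the cube-root LUT: the 4096-entry size matches the description above, but the linear-interpolation scheme and names are my assumptions, not the actual PACE source.

```javascript
// Build a 4096-entry lookup table for cbrt over [0, 1], the range of
// linear-light channel values feeding the OKLab transform. Linear
// interpolation between entries keeps the error tiny for a function
// as smooth as cbrt.
const CBRT_SIZE = 4096;
const cbrtLUT = new Float32Array(CBRT_SIZE + 1);
for (let i = 0; i <= CBRT_SIZE; i++) {
  cbrtLUT[i] = Math.cbrt(i / CBRT_SIZE);
}

function fastCbrt(x) {
  // x is assumed to lie in [0, 1].
  const t = x * CBRT_SIZE;
  const i = Math.min(CBRT_SIZE - 1, Math.floor(t));
  const frac = t - i;
  return cbrtLUT[i] + (cbrtLUT[i + 1] - cbrtLUT[i]) * frac;
}
```

The interpolation error stays well below what is visible after quantizing back to 8-bit sRGB.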
&lt;h2&gt;
  
  
  Stage 2: Reading the image
&lt;/h2&gt;

&lt;p&gt;Before any enhancement, PACE does a full statistical read of the luminance plane. Three feature groups:&lt;/p&gt;
&lt;h3&gt;
  
  
  Distribution features
&lt;/h3&gt;

&lt;p&gt;A 512-bin histogram gives us mean, variance, skewness, kurtosis, and Shannon entropy. Dynamic range is computed as the p95–p5 spread (robust to outliers at either tail). Shadow and highlight ratios count pixels below 0.2 and above 0.8 respectively.&lt;/p&gt;

&lt;p&gt;Entropy is normalized by &lt;code&gt;log2(512)&lt;/code&gt; so it sits in [0, 1] regardless of bin count.&lt;/p&gt;
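That normalization is easy to verify with a self-contained sketch (`hist` is any array of bin counts; the function name is illustrative):

```javascript
// Shannon entropy of a luminance histogram, normalized by
// log2(binCount) so the result lands in [0, 1] regardless of how
// many bins are used (log2(512) = 9 for the histogram above).
function normalizedEntropy(hist) {
  const total = hist.reduce((s, c) => s + c, 0);
  let H = 0;
  for (const count of hist) {
    if (count === 0) continue; // 0 * log(0) is taken as 0
    const p = count / total;
    H -= p * Math.log2(p);
  }
  return H / Math.log2(hist.length);
}
```

A perfectly flat histogram scores 1; a single spike scores 0.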
&lt;h3&gt;
  
  
  Structure features
&lt;/h3&gt;

&lt;p&gt;Gradient magnitude uses the Alpha-Max + Beta-Min approximation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;grad&lt;/span&gt; &lt;span class="err"&gt;≈&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;gx&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;gy&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.25&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;gx&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;gy&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This trades a small amount of accuracy for a significant speedup — no square root, no squaring. Edge density is then noise-adjusted before it leaves this stage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;edgeDensity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rawEdge&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;adjusted&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.25&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The soft normalization in the denominator prevents high-texture images from saturating the density estimate.&lt;/p&gt;
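A toy check of that saturation behavior, writing the noise-adjusted edge value out explicitly (inputs are illustrative):

```javascript
// Soft normalization x / (x + k): monotone increasing, but bounded
// below 1, so even extreme texture can't pin edge density at its
// ceiling. Noise in the denominator of the first step discounts
// edges that are probably just grain.
function softNormalize(rawEdge, noiseRatio) {
  const adjusted = rawEdge / (1 + 0.5 * noiseRatio);
  return adjusted / (adjusted + 0.25);
}

console.log(softNormalize(0.1, 0)); // modest edge content
console.log(softNormalize(10, 0));  // extreme texture, still below 1
```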

&lt;h3&gt;
  
  
  Noise features
&lt;/h3&gt;

&lt;p&gt;Each pixel's absolute deviation from its 4-neighbor mean is the noise proxy. It's not as precise as a Laplacian-of-Gaussian approach, but it's a single pass, requires no window allocation, and correlates well enough with perceived noise to drive the downstream parameters correctly.&lt;/p&gt;
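A sketch of that single pass over a row-major luminance buffer (function name and buffer layout are my assumptions):

```javascript
// Single-pass noise proxy: mean absolute deviation of each interior
// pixel from the average of its 4 neighbors (left/right/up/down).
function noiseProxy(L, width, height) {
  let sum = 0;
  let count = 0;
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      const i = y * width + x;
      const neighborMean =
        (L[i - 1] + L[i + 1] + L[i - width] + L[i + width]) / 4;
      sum += Math.abs(L[i] - neighborMean);
      count++;
    }
  }
  return count ? sum / count : 0;
}
```

A flat image scores exactly zero; any high-frequency fluctuation raises the score.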

&lt;h2&gt;
  
  
  Stage 3: Deriving every parameter
&lt;/h2&gt;

&lt;p&gt;This is where PACE diverges from conventional pipelines. Rather than exposing parameters to the user, it maps features to parameters through a set of monotonic functions with smooth clamping.&lt;/p&gt;

&lt;h3&gt;
  
  
  λ — nonlinear compression strength
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;contrastStrength&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.6&lt;/span&gt;&lt;span class="err"&gt;√&lt;/span&gt;&lt;span class="nx"&gt;variance&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;dynamicRange&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.4&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;edgeDensity&lt;/span&gt;

&lt;span class="nx"&gt;noiseEnergy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.7&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;microContrast&lt;/span&gt;

&lt;span class="nx"&gt;λ_raw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="nx"&gt;contrastStrength&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;1.2&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;noiseEnergy&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.4&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;textureIndex&lt;/span&gt;

&lt;span class="nx"&gt;λ&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;λ_raw&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;λ_raw&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      &lt;span class="err"&gt;←&lt;/span&gt; &lt;span class="nx"&gt;smooth&lt;/span&gt; &lt;span class="nx"&gt;clamp&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;λ controls how aggressively the final delta is compressed via &lt;code&gt;Δ / (1 + λ|Δ|)&lt;/code&gt;. High noise → high λ → stronger compression → less noise amplification. High existing contrast → low λ → lighter touch. The &lt;code&gt;x/(1+x)&lt;/code&gt; clamp guarantees λ never reaches 1 and the denominator never explodes.&lt;/p&gt;
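The compression step itself is one line; this sketch demonstrates the properties the paragraph relies on:

```javascript
// Reinhard-style soft compression: small deltas pass almost
// unchanged, large deltas are squeezed, and the output magnitude
// never reaches 1/lambda.
function compressDelta(delta, lambda) {
  return delta / (1 + lambda * Math.abs(delta));
}
```

Because the function is odd, darkening and brightening deltas are treated symmetrically.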

&lt;h3&gt;
  
  
  β — highlight protection
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;highlightDominance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;highlightRatio&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.4&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;skewness&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;mean&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;dynamicRange&lt;/span&gt;

&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;highlightDominance&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;shadowRatio&lt;/span&gt;

&lt;span class="nx"&gt;β&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;β feeds into a luminance mask &lt;code&gt;max(0.15, 1 − β*L[i])&lt;/code&gt;. A bright, positively-skewed image gets high β, which pulls the mask down toward 0.15 in the highlights — protecting them from over-enhancement without a hard clip.&lt;/p&gt;
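The mask itself, as a sketch:

```javascript
// Highlight-protection mask: falls linearly with luminance at rate
// beta, with a floor of 0.15 so no pixel is ever fully frozen.
function lumMask(L, beta) {
  return Math.max(0.15, 1 - beta * L);
}
```

With β near its 0.8 ceiling, the brightest pixels keep only about a fifth of the enhancement a black pixel would receive.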

&lt;h3&gt;
  
  
  τ — tone limiter threshold
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;τ&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.35&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="err"&gt;√&lt;/span&gt;&lt;span class="nx"&gt;variance&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;entropy&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;τ&lt;/span&gt; &lt;span class="err"&gt;∈&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.35&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1.2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Low-contrast, low-entropy images (flat skies, underexposed shots) get high τ — the tone limiter allows more headroom because the image needs more work. High-contrast scenes get lower τ — the limiter kicks in earlier to prevent clipping.&lt;/p&gt;

&lt;h3&gt;
  
  
  globalAlpha and CLAHE parameters
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;contrastNeed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="nx"&gt;entropy&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="nx"&gt;dynamicRange&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;structureConfidence&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;edgeDensity&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;noiseRatio&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;imbalance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;shadowRatio&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="nx"&gt;highlightRatio&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;

&lt;span class="nx"&gt;globalAlpha&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;f&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;imbalance&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;contrastNeed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;structureConfidence&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;tileSize&lt;/span&gt; &lt;span class="err"&gt;∈&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nx"&gt;rounded&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;nearest&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;
&lt;span class="nx"&gt;clipLimit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.02&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.08&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;structureConfidence&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Structured, noise-free images get smaller tiles (capturing local contrast at the right scale) and higher clip limits (allowing more redistribution). Noisy images get larger tiles and conservative clip limits.&lt;/p&gt;
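The clip limit line is given directly above; the tile-size mapping below is a hypothetical linear ramp just to illustrate the direction of the relationship (the real `f()` lives in the source):

```javascript
// clipLimit follows the stated formula. The tileSize ramp here is an
// assumption, NOT PACE's actual mapping: higher structure confidence
// means smaller tiles, snapped to a multiple of 8 inside [8, 64].
function deriveClaheParams(structureConfidence) {
  const clipLimit = 0.02 + 0.08 * structureConfidence;
  const rawTile = 64 - 56 * structureConfidence; // illustrative ramp
  const tileSize = Math.min(64, Math.max(8, Math.round(rawTile / 8) * 8));
  return { tileSize, clipLimit };
}
```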

&lt;h2&gt;
  
  
  Stage 5: Spatial control maps
&lt;/h2&gt;

&lt;p&gt;Six maps are generated from a single gradient pass over L. The key ones:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edge map&lt;/strong&gt;: Alpha-Max + Beta-Min magnitude, same approximation as stage 2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structure mask&lt;/strong&gt;: Euclidean gradient magnitude, normalized globally by its maximum. Used to boost enhancement in structurally confident regions via &lt;code&gt;structureMask^0.7&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skin damp map&lt;/strong&gt;: A Gaussian centered at L=0.5 with σ=0.18, producing values in [0.3, 1.0]. Mid-luminance pixels receive suppressed enhancement. This is a luminance heuristic, not color-based skin detection — it works because skin tones in OKLab tend to cluster near mid-L, and it naturally protects smooth gradients (faces, fabric) from over-sharpening.&lt;/p&gt;
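The skin damp map reduces to a one-liner per pixel. The exact scaling below (an inverted Gaussian with a 0.3 floor) is my reading of the description, not code copied from the source:

```javascript
// Skin-damp map: an inverted Gaussian centered at L = 0.5 with
// sigma = 0.18. Mid-luminance pixels bottom out near 0.3 (strong
// damping); deep shadows and bright highlights pass through near 1.
function skinDamp(L) {
  const sigma = 0.18;
  const g = Math.exp(-((L - 0.5) ** 2) / (2 * sigma * sigma));
  return 1 - 0.7 * g;
}
```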

&lt;p&gt;&lt;strong&gt;Local alpha map&lt;/strong&gt;: Computed tile-by-tile. Each tile measures gradient coherence (mean gradient / gradient std dev) weighted by noise, then modulates globalAlpha spatially. High-structure tiles get more enhancement; flat or noisy tiles get less. This map is then smoothed by a guided filter to prevent block boundaries from appearing in the output.&lt;/p&gt;
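The per-tile coherence measure can be sketched like this (the epsilon guard is illustrative, and the noise weighting is omitted):

```javascript
// Gradient coherence for one tile: mean gradient over its standard
// deviation. Structured tiles have a high mean with low spread;
// noisy tiles have high spread, which pulls the ratio down.
function tileCoherence(gradients) {
  const n = gradients.length;
  let mean = 0;
  for (const g of gradients) mean += g;
  mean /= n;
  let variance = 0;
  for (const g of gradients) variance += (g - mean) * (g - mean);
  const std = Math.sqrt(variance / n);
  return mean / (std + 1e-6); // guard against a perfectly uniform tile
}
```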

&lt;p&gt;&lt;strong&gt;Lsmall and Lmedium&lt;/strong&gt;: A 3×3 Gaussian-approximated smooth and a 5×5 box smooth of that result. These two scales provide the illumination estimates used in the Retinex computation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 6: The blending stack
&lt;/h2&gt;

&lt;p&gt;This is the core of PACE. Per pixel:&lt;/p&gt;

&lt;h3&gt;
  
  
  Three signals combined into one delta
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Retinex&lt;/span&gt;
&lt;span class="nx"&gt;reflectance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Lsmall&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Lmedium&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="nx"&gt;detailMask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;clamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;reflectance&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nx"&gt;localMean&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;mean&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="nx"&gt;neighbors&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;Lsmall&lt;/span&gt;
&lt;span class="nx"&gt;textureMask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;edge&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;edge&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.015&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;deltaDetail&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;clamp&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;Lsmall&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="nx"&gt;localMean&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;textureMask&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt;&lt;span class="mf"&gt;0.25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.25&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nx"&gt;deltaClahe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Lclahe&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="nx"&gt;L&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nx"&gt;edgeAdaptive&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;edge&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;edge&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.03&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nx"&gt;delta&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;deltaClahe&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.45&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;deltaDetail&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;detailMask&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;skinDamp&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;structureBoost&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;edgeAdaptive&lt;/span&gt;
&lt;span class="nx"&gt;delta&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;clamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;delta&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Retinex term (&lt;code&gt;log(Lsmall) − log(Lmedium)&lt;/code&gt;) approximates the reflectance component of the image by treating Lmedium as the illumination estimate. A positive reflectance means the pixel is brighter than its local surround — a highlight or edge — and the detail mask opens up to let more Laplacian texture through.&lt;/p&gt;

&lt;p&gt;The Laplacian term (&lt;code&gt;Lsmall − localMean&lt;/code&gt;) is a band-pass detail signal. The &lt;code&gt;textureMask&lt;/code&gt; gates it by edge strength so it only amplifies where there's genuine structure, not flat regions or noise.&lt;/p&gt;

&lt;h3&gt;
  
  
  Three successive nonlinear compressions
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Halo suppression&lt;/span&gt;
&lt;span class="nx"&gt;deltaStable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;delta&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;delta&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;ε&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;// Tone limiter (luminance-adaptive)&lt;/span&gt;
&lt;span class="nx"&gt;deltaLimited&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;deltaStable&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;deltaStable&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="sr"&gt;/ &lt;/span&gt;&lt;span class="se"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;τ * &lt;/span&gt;&lt;span class="se"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;0.5 + L&lt;/span&gt;&lt;span class="se"&gt;[&lt;/span&gt;&lt;span class="sr"&gt;i&lt;/span&gt;&lt;span class="se"&gt;]))&lt;/span&gt;&lt;span class="err"&gt;)
&lt;/span&gt;
&lt;span class="c1"&gt;// Soft nonlinear compression (Reinhard-style)&lt;/span&gt;
&lt;span class="nx"&gt;compressed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;deltaLimited&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;λ&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nx"&gt;deltaLimited&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each stage compresses large values more than small ones. Together they form a cascaded soft clipper that prevents any single large delta from blowing through — but doesn't hard-clip anything, so gradients remain smooth.&lt;/p&gt;

&lt;p&gt;The tone limiter has a luminance term &lt;code&gt;(0.5 + L[i])&lt;/code&gt; in the denominator. In highlights, &lt;code&gt;L[i]&lt;/code&gt; is large, so the divisor is large, so the limit is gentler — the signal is allowed to pass through more easily when there's already little headroom for error. In shadows, the divisor is small, so the limit is stricter — protecting shadow detail from noise amplification.&lt;/p&gt;
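&lt;p&gt;Here is the same cascade as a runnable function: a minimal sketch where &lt;code&gt;EPS&lt;/code&gt;, &lt;code&gt;TAU&lt;/code&gt;, and &lt;code&gt;LAMBDA&lt;/code&gt; are illustrative stand-ins for the pipeline's auto-computed values, not the values PACE would actually derive:&lt;/p&gt;

```javascript
// Illustrative constants; PACE computes these per-image.
const EPS = 1e-6;    // numerical-stability epsilon (assumed)
const TAU = 0.8;     // tone-limit scale (assumed)
const LAMBDA = 0.4;  // compression strength (assumed)

function softLimitDelta(delta, luminance) {
  // Halo suppression: large deltas are divided down hard
  const stable = delta / (1 + 2 * Math.abs(delta) + EPS);
  // Tone limiter: stricter in shadows (small luminance), gentler in highlights
  const limited = stable / (1 + Math.abs(stable) / (TAU * (0.5 + luminance)));
  // Reinhard-style soft compression: asymptotic, never hard-clips
  return limited / (1 + LAMBDA * Math.abs(limited));
}
```

&lt;p&gt;Small deltas pass through nearly unchanged while large deltas are compressed toward an asymptote, and the same input delta is limited more strictly at low luminance than at high luminance.&lt;/p&gt;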

&lt;h3&gt;
  
  
  Final luminance
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;edgeResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;edge&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;edge&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;kAdaptive&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;edgeGain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;edgeResponse&lt;/span&gt;&lt;span class="o"&gt;^&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.6&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;edgeResponse&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;lumMask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.15&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="nx"&gt;β&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;L&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="nx"&gt;contrastGain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;finalAlpha&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="err"&gt;−&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;L&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="nx"&gt;enhanced&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;L&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;compressed&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;edgeGain&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;lumMask&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;contrastGain&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;edgeGain&lt;/code&gt; is a soft edge boost that scales superlinearly with edge strength. &lt;code&gt;lumMask&lt;/code&gt; provides the highlight rolloff from β. &lt;code&gt;contrastGain&lt;/code&gt; applies a luminance-weighted global boost that naturally lifts shadows more than highlights.&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwuuzwd9jgu8idpu7bum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwuuzwd9jgu8idpu7bum.png" alt="Lunar Moon" width="800" height="436"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7li6hjbyoleq0iouq0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7li6hjbyoleq0iouq0u.png" alt="satellite" width="800" height="235"&gt;&lt;/a&gt;&lt;br&gt;
A comparison across image categories (these use the live demo):&lt;br&gt;
👉 &lt;strong&gt;&lt;a href="https://muhammedshahid.github.io/pace/src/" rel="noopener noreferrer"&gt;Open Demo&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Underexposed portraits&lt;/strong&gt;: shadows lift without skin posterization or highlight clipping&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hazy landscapes&lt;/strong&gt;: contrast recovers without halo artifacts at sky/ground boundaries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-noise low-light&lt;/strong&gt;: texture enhanced, noise suppressed rather than amplified&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Already well-exposed images&lt;/strong&gt;: minimal change — the pipeline reads the statistics and backs off&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The live demo is available at the repository — you can drag in your own images and watch the per-stage progress in real time.&lt;/p&gt;
&lt;h2&gt;
  
  
  Try it / contribute
&lt;/h2&gt;

&lt;p&gt;The full source implementation is on GitHub:&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://github.com/muhammedshahid/pace" rel="noopener noreferrer"&gt;github.com/muhammedshahid/pace&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pipeline runs in a Web Worker, is framework-free, and exposes a single async function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;applyPACE&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pace-enhance&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;enhanced&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;applyPACE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;imageData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;strength&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;// default 1.0 = PACE auto&lt;/span&gt;
  &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;     &lt;span class="c1"&gt;// true = downloadable trace JSON per stage&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There's also an &lt;code&gt;override&lt;/code&gt; option for researchers who want to fix specific parameters and experiment with the rest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;enhanced&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;applyPACE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;imageData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;override&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;controlParams&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;clipLimit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.05&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;perceptualParams&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;tau&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;Several directions I'm actively thinking about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No-op path&lt;/strong&gt;: an early exit for images that statistically don't need enhancement, based on entropy + dynamic range thresholds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;True structure confidence&lt;/strong&gt;: the standalone &lt;code&gt;structureConfidence&lt;/code&gt; function in the repo is more sophisticated than what's currently wired up — it uses exponential edge decay and noise suppression rather than the raw ratio&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Boundary handling&lt;/strong&gt;: the current 1-pixel border exclusion in blending needs proper edge-extension&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perceptual evaluation&lt;/strong&gt;: SSIM and PSNR don't capture perceptual enhancement quality well. I want to build a feature-based evaluation that correlates with human preference&lt;/li&gt;
&lt;/ul&gt;
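
&lt;p&gt;To make the first idea concrete, here is a hypothetical sketch of such a no-op gate: compute Shannon entropy and a robust percentile span from a 256-bin luminance histogram, and skip enhancement when both clear a threshold. The thresholds and the exact statistics are my assumptions, not anything currently in the repo.&lt;/p&gt;

```javascript
// Hypothetical no-op gate: skip enhancement for images whose luminance
// histogram already shows high entropy and a wide dynamic range.
// Both thresholds are illustrative assumptions.
const ENTROPY_MIN = 7.0; // bits, out of 8 for a 256-bin histogram
const RANGE_MIN = 200;   // levels spanned between 1st and 99th percentiles

function percentileBin(hist, total, q) {
  // Smallest bin index whose cumulative count reaches quantile q
  let acc = 0;
  for (let i = 0; i !== hist.length; i++) {
    acc += hist[i];
    if (acc >= q * total) return i;
  }
  return hist.length - 1;
}

function needsEnhancement(hist) {
  const total = hist.reduce((a, b) => a + b, 0);
  // Shannon entropy of the normalized histogram
  let entropy = 0;
  for (const count of hist) {
    if (count > 0) {
      const p = count / total;
      entropy -= p * Math.log2(p);
    }
  }
  // Robust dynamic range: span between the 1st and 99th percentiles,
  // so a few outlier pixels can't fake a full-range image
  const range =
    percentileBin(hist, total, 0.99) - percentileBin(hist, total, 0.01);
  let wellExposed = false;
  if (entropy >= ENTROPY_MIN) {
    if (range >= RANGE_MIN) wellExposed = true;
  }
  return !wellExposed;
}
```

&lt;p&gt;A uniform histogram (maximum entropy, full range) would be passed through untouched, while a narrow histogram crushed into the shadows would still trigger the full pipeline.&lt;/p&gt;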

&lt;p&gt;If any of this overlaps with your work, I'd genuinely love to hear from you — open an issue, or find me here in the comments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To ensure transparency and reproducibility, PACE is backed by DOI-archived records. The research is available on SSRN&lt;/strong&gt; &lt;em&gt;&lt;a href="https://dx.doi.org/10.2139/ssrn.6661421" rel="noopener noreferrer"&gt;(https://dx.doi.org/10.2139/ssrn.6661421)&lt;/a&gt;&lt;/em&gt;&lt;strong&gt;, the implementation is preserved on Zenodo&lt;/strong&gt; &lt;em&gt;&lt;a href="https://doi.org/10.5281/zenodo.19437397" rel="noopener noreferrer"&gt;(https://doi.org/10.5281/zenodo.19437397)&lt;/a&gt;&lt;/em&gt;&lt;strong&gt;, and the source code is openly available on GitHub&lt;/strong&gt; &lt;em&gt;&lt;a href="https://github.com/muhammedshahid/pace" rel="noopener noreferrer"&gt;(https://github.com/muhammedshahid/pace)&lt;/a&gt;&lt;/em&gt;&lt;strong&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live Demo:&lt;/strong&gt; &lt;em&gt;&lt;a href="https://muhammedshahid.github.io/pace/src/" rel="noopener noreferrer"&gt;https://muhammedshahid.github.io/pace/src/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the first in a series. Next up: the same pipeline explained for frontend developers — what it does without the math, how to drop it into a project, and when you'd actually want to use it.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>computervision</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
