<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: byeval</title>
    <description>The latest articles on Forem by byeval (@byeval).</description>
    <link>https://forem.com/byeval</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F126888%2F9d67c3f9-8e1b-4109-aeac-a238f1342e7d.png</url>
      <title>Forem: byeval</title>
      <link>https://forem.com/byeval</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/byeval"/>
    <language>en</language>
    <item>
      <title>How Face Blur Patches Stay Aligned During Export</title>
      <dc:creator>byeval</dc:creator>
      <pubDate>Thu, 23 Apr 2026 17:55:24 +0000</pubDate>
      <link>https://forem.com/byeval/how-face-blur-patches-stay-aligned-during-export-27ea</link>
      <guid>https://forem.com/byeval/how-face-blur-patches-stay-aligned-during-export-27ea</guid>
      <description>&lt;p&gt;Blurring a face is easy if you only care about a static demo.&lt;/p&gt;

&lt;p&gt;It gets more interesting when the user can redetect faces, expand padding, move patches, resize them, disable individual faces, change blur strength, and then export the final image without everything drifting out of alignment.&lt;/p&gt;

&lt;p&gt;The architecture that held up best for us was patch-based.&lt;/p&gt;

&lt;p&gt;The full companion guide is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-face-blur-patches-stay-aligned-during-export" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-face-blur-patches-stay-aligned-during-export&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  First build one blurred source image
&lt;/h2&gt;

&lt;p&gt;Instead of blurring each patch independently, the editor creates a blurred version of the entire source image first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`blur(&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;blurStrength&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;px)`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drawImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;none&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That gives the editor one reusable blur source for every detected face.&lt;/p&gt;

&lt;p&gt;The advantage is practical:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;blur strength changes rebuild one source image&lt;/li&gt;
&lt;li&gt;existing patches can keep their geometry&lt;/li&gt;
&lt;li&gt;face interactions stay fast&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Every face is a cropped image patch
&lt;/h2&gt;

&lt;p&gt;Each detected face becomes a &lt;code&gt;FabricImage&lt;/code&gt; patch that points into the blurred source image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;patch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FabricImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;blurredSourceElement&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;left&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;top&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;cropX&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;left&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;cropY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;top&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the key design choice.&lt;/p&gt;

&lt;p&gt;The editor is not blurring arbitrary rectangles on demand. It is showing cropped windows into a precomputed blurred image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Geometry and crop source have to move together
&lt;/h2&gt;

&lt;p&gt;If the user drags or resizes a blur patch, the patch cannot only change its visible rectangle. The crop window inside the blurred source has to stay aligned too.&lt;/p&gt;

&lt;p&gt;That is why the implementation normalizes the patch back into real geometry and updates &lt;code&gt;cropX&lt;/code&gt; and &lt;code&gt;cropY&lt;/code&gt; alongside position and size:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;left&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;top&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;cropX&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;left&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;cropY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;top&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;scaleX&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;scaleY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That reset is important. It turns temporary drag/scale transforms back into stable source-image coordinates.&lt;/p&gt;
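&lt;p&gt;The reset step can be sketched as a small pure function that folds the live scale factors into absolute geometry before writing it back. This is a minimal sketch under the same center-origin convention as the patch code above; the names are illustrative, not the editor's actual API:&lt;/p&gt;

```typescript
// Sketch: fold a Fabric-style live transform back into absolute
// source-image geometry. Assumes originX/originY are "center".
interface PatchTransform {
  left: number;   // center x while origin is "center"
  top: number;    // center y while origin is "center"
  width: number;
  height: number;
  scaleX: number;
  scaleY: number;
}

function normalizeGeometry(t: PatchTransform) {
  const width = t.width * t.scaleX;
  const height = t.height * t.scaleY;
  return {
    left: t.left - width / 2, // back to a top-left corner in image space
    top: t.top - height / 2,
    width,
    height,
  };
}
```

&lt;p&gt;The result of a helper like this is what feeds the &lt;code&gt;patch.set&lt;/code&gt; call above, including the fresh &lt;code&gt;cropX&lt;/code&gt; and &lt;code&gt;cropY&lt;/code&gt;.&lt;/p&gt;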

&lt;h2&gt;
  
  
  Padding matters before editing starts
&lt;/h2&gt;

&lt;p&gt;Face detections are usually too tight on their own, so the code expands each detection by a configurable percentage before converting it into a patch.&lt;/p&gt;

&lt;p&gt;That gives users a better starting point and reduces the number of patches that need manual resizing just to cover the edges of a face properly.&lt;/p&gt;

&lt;p&gt;It is a small step, but it makes the blur tool feel much less fragile.&lt;/p&gt;
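&lt;p&gt;The expansion itself is only a few lines. A sketch, assuming a fractional padding value and clamping against the source-image bounds (the names are illustrative):&lt;/p&gt;

```typescript
// Sketch: expand a detection rectangle by a padding percentage,
// clamped so the padded box never leaves the source image.
interface Region { left: number; top: number; width: number; height: number; }

function padRegion(r: Region, paddingPct: number, imgW: number, imgH: number): Region {
  const padX = r.width * paddingPct;
  const padY = r.height * paddingPct;
  const left = Math.max(0, r.left - padX);
  const top = Math.max(0, r.top - padY);
  const right = Math.min(imgW, r.left + r.width + padX);
  const bottom = Math.min(imgH, r.top + r.height + padY);
  return { left, top, width: right - left, height: bottom - top };
}
```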

&lt;h2&gt;
  
  
  Export should replay the current patch set
&lt;/h2&gt;

&lt;p&gt;When the user exports, the editor builds a fresh &lt;code&gt;StaticCanvas&lt;/code&gt; at original size, adds the untouched base image, then re-adds each visible blur patch with its current geometry and crop source.&lt;/p&gt;

&lt;p&gt;That means the saved file reflects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;current blur strength&lt;/li&gt;
&lt;li&gt;current patch positions&lt;/li&gt;
&lt;li&gt;current patch sizes&lt;/li&gt;
&lt;li&gt;current enabled or disabled state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing depends on the on-screen viewport.&lt;/p&gt;
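&lt;p&gt;Stripped of canvas details, the replay step reduces to: drop disabled patches, then emit one draw call per remaining patch that copies its crop window from the blurred source into the full-size output. A dependency-free sketch of that selection logic (the shapes and names are assumptions, not the editor's exact types):&lt;/p&gt;

```typescript
// Sketch: compute the draw operations an export pass would replay.
// Each op copies a crop window from the blurred source into the
// output canvas at the patch's current geometry.
interface BlurPatch {
  left: number; top: number; width: number; height: number;
  cropX: number; cropY: number;
  enabled: boolean;
}

interface DrawOp {
  sx: number; sy: number; sw: number; sh: number; // source crop
  dx: number; dy: number; dw: number; dh: number; // destination rect
}

function buildExportOps(patches: BlurPatch[]): DrawOp[] {
  return patches
    .filter((p) => p.enabled) // disabled faces are simply skipped
    .map((p) => ({
      sx: p.cropX, sy: p.cropY, sw: p.width, sh: p.height,
      dx: p.left, dy: p.top, dw: p.width, dh: p.height,
    }));
}
```

&lt;p&gt;Each resulting op maps directly onto a single &lt;code&gt;drawImage&lt;/code&gt; call against the blurred source.&lt;/p&gt;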

&lt;h2&gt;
  
  
  Why patch-based blur works
&lt;/h2&gt;

&lt;p&gt;This model stays understandable under real editing pressure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one blur source&lt;/li&gt;
&lt;li&gt;many editable crop windows&lt;/li&gt;
&lt;li&gt;normalized geometry after interaction&lt;/li&gt;
&lt;li&gt;export rebuilt from source pixels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is what keeps blur regions aligned even after several rounds of detection, adjustment, and export.&lt;/p&gt;

&lt;p&gt;More implementation details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-face-blur-patches-stay-aligned-during-export" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-face-blur-patches-stay-aligned-during-export&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>canvas</category>
      <category>privacy</category>
    </item>
    <item>
      <title>Auto-Detect Should Not Auto-Apply: Building Reviewable Redaction Overlays</title>
      <dc:creator>byeval</dc:creator>
      <pubDate>Thu, 23 Apr 2026 17:53:55 +0000</pubDate>
      <link>https://forem.com/byeval/auto-detect-should-not-auto-apply-building-reviewable-redaction-overlays-1p4p</link>
      <guid>https://forem.com/byeval/auto-detect-should-not-auto-apply-building-reviewable-redaction-overlays-1p4p</guid>
      <description>&lt;p&gt;The easiest way to make automatic redaction feel unsafe is to skip the review step.&lt;/p&gt;

&lt;p&gt;OCR, barcode detection, license-plate heuristics, and signature detection all make mistakes. If the product silently bakes those guesses into the exported image, users cannot tell whether the result is cautious, incomplete, or just wrong.&lt;/p&gt;

&lt;p&gt;The better architecture is to turn detections into normal editor objects first.&lt;/p&gt;

&lt;p&gt;The full companion guide is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-reviewable-redaction-overlays-work-before-export" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-reviewable-redaction-overlays-work-before-export&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Normalize detector output before it reaches the editor
&lt;/h2&gt;

&lt;p&gt;Different detectors start from very different internals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OCR text blocks&lt;/li&gt;
&lt;li&gt;barcode APIs&lt;/li&gt;
&lt;li&gt;plate-specific filters&lt;/li&gt;
&lt;li&gt;connected-component image analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The editor should not need to know about any of that once a candidate region exists.&lt;/p&gt;

&lt;p&gt;The useful boundary is one normalized shape:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;left&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;top&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;width&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;height&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once every detector emits that region format, the editor can stay consistent even while the detection engines stay totally different.&lt;/p&gt;
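&lt;p&gt;In practice that boundary is a handful of tiny adapters. A sketch, assuming one detector reports DOMRect-style boxes and another reports corner points (both input shapes are illustrative):&lt;/p&gt;

```typescript
// Sketch: one normalized region type at the detector boundary.
interface Region { left: number; top: number; width: number; height: number; }

// e.g. adapting a bounding box that uses x/y instead of left/top
function fromBoundingBox(box: { x: number; y: number; width: number; height: number }): Region {
  return { left: box.x, top: box.y, width: box.width, height: box.height };
}

// e.g. adapting OCR-style corner points (x0, y0) to (x1, y1)
function fromCorners(x0: number, y0: number, x1: number, y1: number): Region {
  return {
    left: Math.min(x0, x1),
    top: Math.min(y0, y1),
    width: Math.abs(x1 - x0),
    height: Math.abs(y1 - y0),
  };
}
```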

&lt;h2&gt;
  
  
  Detection results should become editor objects
&lt;/h2&gt;

&lt;p&gt;In the markup editor, automatic suggestions are inserted as ordinary objects.&lt;/p&gt;

&lt;p&gt;Text detections can create redact rectangles. Blur and pixelation detections can create effect patches. The important part is that the result is editable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;moveable&lt;/li&gt;
&lt;li&gt;resizable&lt;/li&gt;
&lt;li&gt;deletable&lt;/li&gt;
&lt;li&gt;visible before export&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is a much better interaction model than "the detector already changed your image."&lt;/p&gt;

&lt;h2&gt;
  
  
  Tag what the detector owns
&lt;/h2&gt;

&lt;p&gt;The next implementation detail is the part that makes re-detect usable.&lt;/p&gt;

&lt;p&gt;Auto-generated objects are tagged with a source identifier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;redact&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;objectType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;shape&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;filled&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;autoGenerated&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;autoGenerated&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That means the editor can distinguish OCR-generated regions from QR-generated regions and both from manual user edits.&lt;/p&gt;

&lt;p&gt;Without that tag, every new scan risks wiping out the user's manual cleanup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Replace only the detector's previous suggestions
&lt;/h2&gt;

&lt;p&gt;Once objects carry a source tag, rerunning detection becomes much safer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nf"&gt;replaceAutoRedacts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;regions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RedactRegion&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="nx"&gt;sourceTag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clearAutoGenerated&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sourceTag&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;regions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addRedact&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;autoGenerated&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sourceTag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;select&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That gives you a clean behavior model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;detector reruns replace only their own old suggestions&lt;/li&gt;
&lt;li&gt;manual edits stay intact&lt;/li&gt;
&lt;li&gt;the user keeps control of the final reviewed state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is one of those small code decisions that has huge product consequences.&lt;/p&gt;
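&lt;p&gt;&lt;code&gt;clearAutoGenerated&lt;/code&gt; itself is not shown above, but under this tagging scheme it is plausibly just a filter over the canvas objects. A sketch against a minimal stand-in for the canvas (an assumption, not the actual implementation):&lt;/p&gt;

```typescript
// Sketch of the companion helper: remove only objects whose data tag
// matches this detector's sourceTag. The canvas shape is a stand-in.
interface TaggedObject { data?: { autoGenerated?: string }; }

interface ObjectStore {
  getObjects(): TaggedObject[];
  remove(obj: TaggedObject): void;
}

function clearAutoGenerated(canvas: ObjectStore, sourceTag: string): void {
  // snapshot first: removing while iterating a live list skips entries
  const owned = canvas.getObjects().filter(
    (obj) => obj.data?.autoGenerated === sourceTag
  );
  owned.forEach((obj) => canvas.remove(obj));
}
```

&lt;p&gt;Manual objects never carry the tag, so they survive every rerun by construction.&lt;/p&gt;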

&lt;h2&gt;
  
  
  Export still happens after review
&lt;/h2&gt;

&lt;p&gt;The overlay architecture also keeps the export path honest.&lt;/p&gt;

&lt;p&gt;Instead of baking detection results in immediately, the editor rebuilds the final image from the source plus the current object set. So the saved file always reflects the reviewed state, not the detector's first guess.&lt;/p&gt;

&lt;p&gt;That is exactly where privacy tooling should land. Detectors propose. Users decide. Export serializes the decision.&lt;/p&gt;

&lt;h2&gt;
  
  
  The practical lesson
&lt;/h2&gt;

&lt;p&gt;If an automatic detector can be wrong, it should create editable overlays rather than an irreversible export.&lt;/p&gt;

&lt;p&gt;That rule works for OCR, QR codes, signatures, license plates, and basically every other privacy-sensitive suggestion pipeline I have seen.&lt;/p&gt;

&lt;p&gt;It is also the point where an auto-detection feature stops feeling like a demo and starts feeling like a tool people can trust.&lt;/p&gt;

&lt;p&gt;More implementation details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-reviewable-redaction-overlays-work-before-export" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-reviewable-redaction-overlays-work-before-export&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>privacy</category>
      <category>ux</category>
    </item>
    <item>
      <title>Stop Exporting The Viewport: How Zoomed Image Editors Map Back To Original Pixels</title>
      <dc:creator>byeval</dc:creator>
      <pubDate>Thu, 23 Apr 2026 17:52:12 +0000</pubDate>
      <link>https://forem.com/byeval/stop-exporting-the-viewport-how-zoomed-image-editors-map-back-to-original-pixels-3ko5</link>
      <guid>https://forem.com/byeval/stop-exporting-the-viewport-how-zoomed-image-editors-map-back-to-original-pixels-3ko5</guid>
      <description>&lt;p&gt;One of the easiest ways to break an image editor is to confuse the viewport with the image.&lt;/p&gt;

&lt;p&gt;The screen needs zooming, centering, and a comfortable interaction scale. The exported file needs exact source-pixel geometry. If those two layers get mixed together, the UI might look fine while the saved result is wrong.&lt;/p&gt;

&lt;p&gt;That boundary shows up in tools such as crop editors, screenshot redactors, and browser-side markup tools.&lt;/p&gt;

&lt;p&gt;The full companion guide is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-zoomed-image-editors-map-back-to-original-pixels" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-zoomed-image-editors-map-back-to-original-pixels&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The viewport is for comfort, not truth
&lt;/h2&gt;

&lt;p&gt;In the editor code, the Fabric canvas is sized to the visible container, then the viewport is transformed to fit the uploaded image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;scale&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;clientWidth&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;clientHeight&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;offsetX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;clientWidth&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;offsetY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;clientHeight&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setZoom&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setViewportTransform&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;offsetX&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;offsetY&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is exactly what the UI should do. Users should not be forced to edit an 1800-pixel image at a 1:1 screen scale.&lt;/p&gt;

&lt;p&gt;The important part is what this transform does not mean. It does not mean the viewport has become the source of truth for export.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keep objects in image space
&lt;/h2&gt;

&lt;p&gt;The source image and editor objects are still inserted using the original image dimensions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FabricImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;selectable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;evented&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That means a crop box or redaction region is defined in source-image coordinates even while the visible canvas is zoomed and centered for editing.&lt;/p&gt;

&lt;p&gt;Once that boundary is in place, zooming becomes a viewing concern instead of a geometry problem.&lt;/p&gt;
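&lt;p&gt;This split also makes pointer math mechanical: mapping a screen position back into image space is just the inverse of the viewport transform. A sketch, assuming the scale-plus-translate transform shown above (no rotation or skew):&lt;/p&gt;

```typescript
// Sketch: invert a scale-plus-translate viewport transform
// [s, 0, 0, s, offsetX, offsetY] so a pointer position maps
// back to source-image coordinates.
function screenToImage(
  screenX: number, screenY: number,
  scale: number, offsetX: number, offsetY: number
) {
  return {
    x: (screenX - offsetX) / scale,
    y: (screenY - offsetY) / scale,
  };
}
```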

&lt;h2&gt;
  
  
  Resizing is where geometry bugs usually appear
&lt;/h2&gt;

&lt;p&gt;The subtle failure mode shows up while the user drags the scaling handles.&lt;/p&gt;

&lt;p&gt;If you keep committing transformed width and height on every live scaling event, the numbers start compounding and the box drifts away from the correct geometry.&lt;/p&gt;

&lt;p&gt;The safer approach is to read the current temporary size from the base dimensions plus scale:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;moveableRect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;moveableRect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scaleX&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;moveableRect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;moveableRect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scaleY&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then normalize the box back into real width, height, left, and top values when the interaction is committed.&lt;/p&gt;

&lt;p&gt;That sounds small, but it is the difference between a stable editor and a resize tool that gets weirder every time the user drags a corner.&lt;/p&gt;
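&lt;p&gt;The commit step can be sketched as a pure function that multiplies size by scale exactly once and resets the scale factors, so repeated commits cannot compound (the names are illustrative):&lt;/p&gt;

```typescript
// Sketch: commit a live scale interaction into absolute geometry,
// then reset the scale factors so they never compound.
interface Rect {
  left: number; top: number;
  width: number; height: number;
  scaleX: number; scaleY: number;
}

function commitScale(rect: Rect): Rect {
  return {
    left: rect.left,
    top: rect.top,
    width: rect.width * rect.scaleX,
    height: rect.height * rect.scaleY,
    scaleX: 1, // reset: the geometry now carries the size
    scaleY: 1,
  };
}
```

&lt;p&gt;Because the scale factors come back as 1, committing the same rectangle twice is a no-op instead of a second multiplication.&lt;/p&gt;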

&lt;h2&gt;
  
  
  Export should rebuild from source pixels
&lt;/h2&gt;

&lt;p&gt;The other common mistake is exporting the visible viewport as if it were the artifact.&lt;/p&gt;

&lt;p&gt;For crop export, the editor creates a fresh canvas sized to the normalized crop region and draws directly from the original image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drawImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sourceImage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;nextRect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;left&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;nextRect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;top&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;height&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For full markup export, the same idea appears as a clean &lt;code&gt;StaticCanvas&lt;/code&gt; at the original image dimensions, with the base image re-added and the current overlay objects replayed onto it.&lt;/p&gt;

&lt;p&gt;That is what makes the saved result accurate even if the editor viewport was zoomed out, zoomed in, or centered differently when the user clicked export.&lt;/p&gt;

&lt;h2&gt;
  
  
  The useful mental model
&lt;/h2&gt;

&lt;p&gt;I think the cleanest rule is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;viewport transform is for viewing&lt;/li&gt;
&lt;li&gt;object geometry is for editing&lt;/li&gt;
&lt;li&gt;export canvas is for truth&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If those three concerns stay separate, the math stays understandable and the output stays aligned.&lt;/p&gt;

&lt;p&gt;If they get mixed together, you usually end up debugging "why is the export offset?" bugs that are really just architecture bugs.&lt;/p&gt;
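
&lt;p&gt;As a minimal sketch of that separation (the &lt;code&gt;Viewport&lt;/code&gt; shape and &lt;code&gt;toImagePoint&lt;/code&gt; name are illustrative, not the editor's real API), mapping a screen click back to original image pixels only ever touches the viewing transform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;interface Viewport {
  zoom: number; // display scale only
  panX: number; // display translation in screen pixels
  panY: number;
}

// Invert the viewport transform so a pointer position lands in
// original image coordinates; object geometry never changes here.
function toImagePoint(screenX: number, screenY: number, vp: Viewport) {
  return {
    x: (screenX - vp.panX) / vp.zoom,
    y: (screenY - vp.panY) / vp.zoom,
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;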

&lt;p&gt;More implementation details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-zoomed-image-editors-map-back-to-original-pixels" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-zoomed-image-editors-map-back-to-original-pixels&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>canvas</category>
      <category>privacy</category>
    </item>
    <item>
      <title>How To Auto-Detect QR Codes, Signatures, and License Plates In The Browser</title>
      <dc:creator>byeval</dc:creator>
      <pubDate>Wed, 22 Apr 2026 13:51:26 +0000</pubDate>
      <link>https://forem.com/byeval/how-to-auto-detect-qr-codes-signatures-and-license-plates-in-the-browser-2e58</link>
      <guid>https://forem.com/byeval/how-to-auto-detect-qr-codes-signatures-and-license-plates-in-the-browser-2e58</guid>
      <description>&lt;p&gt;One of the easiest mistakes in privacy tooling is trying to solve every target type with one detector.&lt;/p&gt;

&lt;p&gt;QR codes, signatures, and license plates all end up as "regions to hide," but technically they are different problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;QR codes are machine-readable symbols&lt;/li&gt;
&lt;li&gt;license plates are structured text with strong visual constraints&lt;/li&gt;
&lt;li&gt;signatures are image shapes more than readable words&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you force all three through generic OCR, the output gets noisy fast.&lt;/p&gt;

&lt;p&gt;The companion guide for this piece is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-to-auto-detect-qr-codes-signatures-and-license-plates-in-the-browser" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-to-auto-detect-qr-codes-signatures-and-license-plates-in-the-browser&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Mixed detection works better than one universal pipeline
&lt;/h2&gt;

&lt;p&gt;From the product side, these features all look related. The user wants the tool to suggest privacy-sensitive regions automatically.&lt;/p&gt;

&lt;p&gt;From the engineering side, they need different signals.&lt;/p&gt;

&lt;p&gt;The more useful architecture is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;multiple detector functions&lt;/li&gt;
&lt;li&gt;one normalized region format&lt;/li&gt;
&lt;li&gt;one editor surface&lt;/li&gt;
&lt;li&gt;one review step before export&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That keeps the interaction model simple without pretending the detection problem is simple.&lt;/p&gt;
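
&lt;p&gt;A hedged sketch of what that architecture can look like (the &lt;code&gt;DetectedRegion&lt;/code&gt; fields and the &lt;code&gt;detectAll&lt;/code&gt; helper are assumptions for illustration, not the real model):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;type RegionSource = "qr" | "plate" | "signature";

interface DetectedRegion {
  source: RegionSource;
  left: number;   // original-image pixel coordinates
  top: number;
  width: number;
  height: number;
}

interface Detector {
  detect(image: unknown): Promise&amp;lt;DetectedRegion[]&amp;gt;;
}

// Fan out to every detector, then collect everything into one
// normalized list the editor and the review step can share.
async function detectAll(image: unknown, detectors: Detector[]): Promise&amp;lt;DetectedRegion[]&amp;gt; {
  const all: DetectedRegion[] = [];
  for (const detector of detectors) {
    all.push(...(await detector.detect(image)));
  }
  return all;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;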

&lt;h2&gt;
  
  
  QR codes and barcodes: use the browser when the browser already knows
&lt;/h2&gt;

&lt;p&gt;For QR codes and barcodes, the cleanest path is usually &lt;code&gt;BarcodeDetector&lt;/code&gt; when the browser supports it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;detector&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BarcodeDetector&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;formats&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;qr_code&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;code_128&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ean_13&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pdf417&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;detector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That gives you native symbol detection plus bounding boxes you can pad into safer blur or redaction regions.&lt;/p&gt;

&lt;p&gt;The product lesson here is mostly about failure modes. If &lt;code&gt;BarcodeDetector&lt;/code&gt; is unavailable, the UI should say so explicitly. Silent failure is worse than no feature, because the user ends up trusting an empty result that really means the detector never ran.&lt;/p&gt;
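
&lt;p&gt;A minimal feature-detection sketch (&lt;code&gt;scanForBarcodes&lt;/code&gt; and the sink hook are illustrative names, not the product's real API):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;interface UnsupportedSink {
  notify(message: string): void;
}

async function scanForBarcodes(source: unknown, sink: UnsupportedSink) {
  const DetectorCtor = (globalThis as any).BarcodeDetector;
  if (!DetectorCtor) {
    // An explicit message beats an empty result the user might wrongly trust.
    sink.notify("Barcode detection is not supported in this browser.");
    return [];
  }
  const detector = new DetectorCtor({ formats: ["qr_code", "code_128"] });
  return detector.detect(source);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;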

&lt;h2&gt;
  
  
  License plates: OCR is useful, but only as a candidate generator
&lt;/h2&gt;

&lt;p&gt;License plates are text, but not ordinary text. A raw OCR pass usually gives too much junk unless you filter aggressively.&lt;/p&gt;

&lt;p&gt;The pattern we used is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;start from OCR blocks and lines&lt;/li&gt;
&lt;li&gt;normalize candidate text to uppercase alphanumeric characters&lt;/li&gt;
&lt;li&gt;require both letters and digits&lt;/li&gt;
&lt;li&gt;filter by plausible string length&lt;/li&gt;
&lt;li&gt;reject impossible aspect ratios&lt;/li&gt;
&lt;li&gt;ignore text in unlikely vertical positions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That turns OCR into a candidate generator instead of pretending it understands the full context of a vehicle image.&lt;/p&gt;

&lt;p&gt;This is often the right level of ambition for privacy tooling: narrow heuristics on top of a broad detector.&lt;/p&gt;
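
&lt;p&gt;Steps 2 through 6 can be sketched as a single predicate; the thresholds below are illustrative assumptions, not tuned values from the production pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;interface OcrLine {
  text: string;
  width: number;
  height: number;
  verticalCenter: number; // 0 at the top of the image, 1 at the bottom
}

function isPlateCandidate(line: OcrLine): boolean {
  const normalized = line.text.toUpperCase().replace(/[^A-Z0-9]/g, "");
  if (!/[A-Z]/.test(normalized)) return false; // must contain letters
  if (!/[0-9]/.test(normalized)) return false; // and digits
  if (normalized.length &amp;lt; 4) return false;  // plausible string length
  if (normalized.length &amp;gt; 9) return false;
  const aspect = line.width / line.height;
  if (aspect &amp;lt; 2) return false;             // plates are wide and short
  if (aspect &amp;gt; 8) return false;
  return line.verticalCenter &amp;gt; 0.3;         // reject unlikely positions
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;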

&lt;h2&gt;
  
  
  Signatures: image analysis beats text recognition
&lt;/h2&gt;

&lt;p&gt;Signatures are the opposite case. OCR often performs poorly because handwriting is inconsistent and the goal is not to read the text anyway. The goal is to find the signed region.&lt;/p&gt;

&lt;p&gt;So the better signal was image analysis on a scaled canvas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;imageData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getImageData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;threshold&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;estimateSignatureThreshold&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From there, the detector walks connected dark components, measures each region, and filters by heuristics like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;width&lt;/li&gt;
&lt;li&gt;height&lt;/li&gt;
&lt;li&gt;fill ratio&lt;/li&gt;
&lt;li&gt;relative position on the page&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then nearby candidates can be merged into one more useful region.&lt;/p&gt;

&lt;p&gt;This is not a universal signature model, and that is exactly the point. It is a practical heuristic for one narrow job.&lt;/p&gt;
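
&lt;p&gt;The merge step can be sketched like this (&lt;code&gt;maxGap&lt;/code&gt; is an assumption, measured in pixels of the scaled analysis canvas):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;interface Box { left: number; top: number; right: number; bottom: number; }

// Merge two candidate boxes when the gap between them is small enough;
// a negative gap means the boxes already overlap on that axis.
function mergeIfClose(a: Box, b: Box, maxGap: number): Box | null {
  const horizontalGap = Math.max(a.left, b.left) - Math.min(a.right, b.right);
  const verticalGap = Math.max(a.top, b.top) - Math.min(a.bottom, b.bottom);
  if (horizontalGap &amp;gt; maxGap) return null;
  if (verticalGap &amp;gt; maxGap) return null;
  return {
    left: Math.min(a.left, b.left),
    top: Math.min(a.top, b.top),
    right: Math.max(a.right, b.right),
    bottom: Math.max(a.bottom, b.bottom),
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;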

&lt;h2&gt;
  
  
  Detection quality depends on what happens after detection
&lt;/h2&gt;

&lt;p&gt;Even when the detector logic is correct, the raw output is usually not ready for users.&lt;/p&gt;

&lt;p&gt;The post-processing layer matters a lot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add padding so the target is fully covered&lt;/li&gt;
&lt;li&gt;merge nearby fragments&lt;/li&gt;
&lt;li&gt;deduplicate overlapping results&lt;/li&gt;
&lt;li&gt;normalize everything into the same region shape the editor understands&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you skip those steps, the result is usually a screen full of tiny, fragmented boxes that nobody trusts.&lt;/p&gt;
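
&lt;p&gt;For example, deduplication can be as simple as an overlap-ratio check; this is a sketch with illustrative names and an assumed threshold, not the production code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;interface Region { left: number; top: number; width: number; height: number; }

// Overlap ratio is intersection area over union area (IoU).
function overlapRatio(a: Region, b: Region): number {
  const w = Math.min(a.left + a.width, b.left + b.width) - Math.max(a.left, b.left);
  const h = Math.min(a.top + a.height, b.top + b.height) - Math.max(a.top, b.top);
  if (w &amp;lt;= 0) return 0;
  if (h &amp;lt;= 0) return 0;
  const intersection = w * h;
  const union = a.width * a.height + b.width * b.height - intersection;
  return intersection / union;
}

// Keep a region only if it does not heavily overlap one already kept.
function dedupe(regions: Region[], threshold: number): Region[] {
  const kept: Region[] = [];
  for (const region of regions) {
    const duplicate = kept.some(function (k) { return overlapRatio(k, region) &amp;gt;= threshold; });
    if (!duplicate) kept.push(region);
  }
  return kept;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;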

&lt;h2&gt;
  
  
  One review model for many detectors
&lt;/h2&gt;

&lt;p&gt;The biggest architecture win was not in the detectors themselves. It was in the shared output model.&lt;/p&gt;

&lt;p&gt;Every detector returns the same kind of region object, and every region is inserted into the same editor as a reviewable overlay.&lt;/p&gt;

&lt;p&gt;That gives the product a stable interaction model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;QR detection can suggest a region&lt;/li&gt;
&lt;li&gt;signature detection can suggest another&lt;/li&gt;
&lt;li&gt;plate detection can suggest blur regions&lt;/li&gt;
&lt;li&gt;the user still reviews all of them the same way&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is much easier to maintain than building a special-case UX for every detector type.&lt;/p&gt;

&lt;h2&gt;
  
  
  The practical lesson
&lt;/h2&gt;

&lt;p&gt;Privacy-sensitive detection gets better when you stop looking for one perfect detector and start using the right signal for each target.&lt;/p&gt;

&lt;p&gt;The useful stack is often not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one model&lt;/li&gt;
&lt;li&gt;one answer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;multiple detectors&lt;/li&gt;
&lt;li&gt;narrow heuristics&lt;/li&gt;
&lt;li&gt;normalized region output&lt;/li&gt;
&lt;li&gt;explicit human review before export&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That combination is usually much more reliable than a single generalized pass.&lt;/p&gt;

&lt;p&gt;More implementation details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-to-auto-detect-qr-codes-signatures-and-license-plates-in-the-browser" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-to-auto-detect-qr-codes-signatures-and-license-plates-in-the-browser&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>privacy</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Building Browser-First Image Redaction Without Uploading Files</title>
      <dc:creator>byeval</dc:creator>
      <pubDate>Wed, 22 Apr 2026 13:50:54 +0000</pubDate>
      <link>https://forem.com/byeval/building-browser-first-image-redaction-without-uploading-files-4mpo</link>
      <guid>https://forem.com/byeval/building-browser-first-image-redaction-without-uploading-files-4mpo</guid>
      <description>&lt;p&gt;If a redaction tool starts by uploading a sensitive screenshot to a server, the product has already created a trust problem.&lt;/p&gt;

&lt;p&gt;That is why I think browser-first redaction is more than a frontend implementation choice. It is part of the product claim.&lt;/p&gt;

&lt;p&gt;The companion guide for this piece is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-browser-first-image-redaction-works-without-uploads" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-browser-first-image-redaction-works-without-uploads&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What "browser-first" should actually mean
&lt;/h2&gt;

&lt;p&gt;A lot of products say they run in the browser. That statement is too vague to be useful.&lt;/p&gt;

&lt;p&gt;For privacy-sensitive editing, the more meaningful boundary is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the original image stays local by default&lt;/li&gt;
&lt;li&gt;editing happens on the client&lt;/li&gt;
&lt;li&gt;export is rebuilt locally from the source image&lt;/li&gt;
&lt;li&gt;the final file is downloaded directly in the browser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That does not make the implementation simpler. It just makes the privacy boundary explicit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The visible canvas is not the source of truth
&lt;/h2&gt;

&lt;p&gt;One of the first problems in a real editor is coordinate systems.&lt;/p&gt;

&lt;p&gt;Users need a comfortable viewport with zooming and panning. The exported redaction, however, has to map back to the original image dimensions.&lt;/p&gt;

&lt;p&gt;So the visible canvas should be treated as an interaction surface, not as the canonical image.&lt;/p&gt;

&lt;p&gt;The implementation pattern we use is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;keep the original image dimensions as the source of truth&lt;/li&gt;
&lt;li&gt;add the source image as the editor base layer&lt;/li&gt;
&lt;li&gt;fit the viewport to the available screen space&lt;/li&gt;
&lt;li&gt;keep overlays aligned to the original image coordinate system&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That split is what makes browser-side editing and accurate export compatible.&lt;/p&gt;
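
&lt;p&gt;Step 3 is the only place display scale enters the picture. A minimal sketch, assuming we never upscale past 100 percent by default:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Compute the display zoom that fits the image into the available
// viewport; stored geometry stays in original image pixels throughout.
function fitZoom(imageWidth: number, imageHeight: number, viewWidth: number, viewHeight: number): number {
  const scale = Math.min(viewWidth / imageWidth, viewHeight / imageHeight);
  return Math.min(scale, 1); // assumption: don't upscale small images
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;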

&lt;h2&gt;
  
  
  Overlays are better than destructive mutations
&lt;/h2&gt;

&lt;p&gt;The next decision was to treat edits as overlay objects rather than immediately mutating the bitmap every time the user interacts with the tool.&lt;/p&gt;

&lt;p&gt;That gives the editor a much better operating model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;redaction boxes can still be moved and resized&lt;/li&gt;
&lt;li&gt;blur and pixelation patches can react to strength changes&lt;/li&gt;
&lt;li&gt;auto-detected regions can be replaced without touching manual edits&lt;/li&gt;
&lt;li&gt;the user can review the exact objects that will affect the export&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is especially important in privacy tools because auto-generated suggestions should never feel permanent before review.&lt;/p&gt;

&lt;h2&gt;
  
  
  Export should rebuild the result, not capture the screen
&lt;/h2&gt;

&lt;p&gt;Many browser image tools get sloppy at export time. They save the current editor state too literally, or they effectively screenshot the viewport.&lt;/p&gt;

&lt;p&gt;That is not good enough for a privacy workflow.&lt;/p&gt;

&lt;p&gt;The more reliable pattern is to create a clean export canvas at the original image dimensions, add the source image again, then replay the overlays on top of it.&lt;/p&gt;

&lt;p&gt;In our case that starts with a fresh Fabric static canvas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;exportCanvas&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StaticCanvas&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;util&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createCanvasElement&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sourceImageElement&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sourceImageElement&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then each visible overlay is cloned or reconstructed before generating the final file.&lt;/p&gt;

&lt;p&gt;That matters because the editor may contain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;manual redaction shapes&lt;/li&gt;
&lt;li&gt;blur or pixelation effect patches&lt;/li&gt;
&lt;li&gt;auto-detected regions&lt;/li&gt;
&lt;li&gt;text or annotation objects in adjacent tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Export should represent the intended result, not whatever happens to be visible on a scaled viewport at that instant.&lt;/p&gt;
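
&lt;p&gt;The replay loop can be sketched with stand-in types (&lt;code&gt;Overlay&lt;/code&gt; and &lt;code&gt;ExportSurface&lt;/code&gt; here stand for the Fabric object and the clean &lt;code&gt;StaticCanvas&lt;/code&gt; used in practice):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;interface Overlay {
  visible: boolean;
  clone(): Promise&amp;lt;Overlay&amp;gt;;
}

interface ExportSurface {
  add(object: Overlay): void;
}

// Rebuild the result: clone each visible overlay onto the export
// surface instead of capturing whatever the viewport shows.
async function replayOverlays(surface: ExportSurface, overlays: Overlay[]) {
  for (const overlay of overlays) {
    if (!overlay.visible) continue; // disabled regions never reach the export
    surface.add(await overlay.clone());
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;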

&lt;h2&gt;
  
  
  Local download is part of the privacy boundary
&lt;/h2&gt;

&lt;p&gt;Once the export exists as a data URL or blob, the browser can download it directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;anchor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;a&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;anchor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;href&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;dataUrl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;anchor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;download&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;anchor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That seems basic, but in a privacy product it matters. If the editing workflow is local and the export path is local, the product story is easier to understand and easier to trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  The hard parts are around the edges
&lt;/h2&gt;

&lt;p&gt;Drawing a rectangle on a canvas is not the challenge.&lt;/p&gt;

&lt;p&gt;The real engineering work shows up in the boundaries around the editor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;keeping original coordinates stable while the viewport zooms and pans&lt;/li&gt;
&lt;li&gt;making auto-detection additive instead of destructive&lt;/li&gt;
&lt;li&gt;rebuilding blur and pixelation patches accurately during export&lt;/li&gt;
&lt;li&gt;keeping the editor responsive with large images&lt;/li&gt;
&lt;li&gt;cleaning up canvas resources and workers on teardown&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are the concerns that determine whether the tool feels credible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Browser-first is a product decision
&lt;/h2&gt;

&lt;p&gt;The main lesson for me was that "runs in the browser" is not the interesting sentence.&lt;/p&gt;

&lt;p&gt;"Keeps sensitive editing local by default" is the interesting sentence.&lt;/p&gt;

&lt;p&gt;That is the real boundary users care about, and it should shape the implementation.&lt;/p&gt;

&lt;p&gt;If a product claims privacy-safe redaction, the architecture should reflect that claim:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;local source handling&lt;/li&gt;
&lt;li&gt;editable overlays&lt;/li&gt;
&lt;li&gt;explicit review&lt;/li&gt;
&lt;li&gt;local export&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That will usually earn more trust than adding another server-side processing step and asking users not to worry about it.&lt;/p&gt;

&lt;p&gt;More implementation details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-browser-first-image-redaction-works-without-uploads" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-browser-first-image-redaction-works-without-uploads&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>privacy</category>
      <category>frontend</category>
    </item>
    <item>
      <title>OCR Is Not Redaction: Building Safer Auto-Redaction With Tesseract.js</title>
      <dc:creator>byeval</dc:creator>
      <pubDate>Wed, 22 Apr 2026 13:49:57 +0000</pubDate>
      <link>https://forem.com/byeval/ocr-is-not-redaction-building-safer-auto-redaction-with-tesseractjs-1ipj</link>
      <guid>https://forem.com/byeval/ocr-is-not-redaction-building-safer-auto-redaction-with-tesseractjs-1ipj</guid>
      <description>&lt;p&gt;OCR demos usually stop too early.&lt;/p&gt;

&lt;p&gt;They show &lt;code&gt;recognize()&lt;/code&gt;, print some text, and imply that automatic redaction is basically done. In a real product, that is maybe 20 percent of the job.&lt;/p&gt;

&lt;p&gt;What users actually need is a safer pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run OCR on the image.&lt;/li&gt;
&lt;li&gt;Classify risky spans such as emails, phone numbers, account references, dates, and IDs.&lt;/li&gt;
&lt;li&gt;Map those matched spans back to OCR word boxes.&lt;/li&gt;
&lt;li&gt;Pad the boxes so the text edges are fully covered.&lt;/li&gt;
&lt;li&gt;Insert them as editable regions instead of exporting immediately.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is the pattern we use in a browser-first redaction flow built around Tesseract.js.&lt;/p&gt;
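
&lt;p&gt;Step 2 can be sketched as a small pattern table. The two patterns below are illustrative only; real rules need more categories and locale awareness:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;interface RiskySpan { category: string; start: number; end: number; }

const PATTERNS = [
  { category: "email", pattern: /[\w.+-]+@[\w-]+\.[\w.-]+/g },
  { category: "phone", pattern: /\+?\d[\d\s().-]{7,}\d/g },
];

// Classify risky character ranges inside one rebuilt OCR line.
function findRiskySpans(lineText: string): RiskySpan[] {
  const spans: RiskySpan[] = [];
  for (const entry of PATTERNS) {
    for (const match of lineText.matchAll(entry.pattern)) {
      const start = match.index ?? 0;
      spans.push({ category: entry.category, start, end: start + match[0].length });
    }
  }
  return spans;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;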

&lt;p&gt;The full companion guide is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-ocr-assisted-redaction-works-with-tesseract-js" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-ocr-assisted-redaction-works-with-tesseract-js&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why we kept OCR in the browser
&lt;/h2&gt;

&lt;p&gt;Sensitive screenshots are exactly the wrong kind of asset to upload to a server by default just to detect an email address or account number.&lt;/p&gt;

&lt;p&gt;Running OCR in the browser gave us a cleaner privacy boundary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the image stays local by default&lt;/li&gt;
&lt;li&gt;the user can review the result immediately&lt;/li&gt;
&lt;li&gt;the OCR pass can feed directly into the editor without waiting on a round trip&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That still leaves the hardest part unsolved: turning OCR output into something safe enough to help with redaction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Geometry matters more than text
&lt;/h2&gt;

&lt;p&gt;For redaction, plain OCR text is not enough. The editor needs coordinates.&lt;/p&gt;

&lt;p&gt;So instead of treating Tesseract.js as a text extractor, we ask it for structured layout data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recognize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;asset&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ocrSource&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;rotateAuto&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;blocks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That gives us paragraphs, lines, and words with bounding boxes. Without those word-level bounds, there are no usable redaction candidates. There is only text.&lt;/p&gt;

&lt;p&gt;We also lazily create and reuse the worker instead of rebuilding it on every scan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;ocrWorkerRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createWorker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;loadOcrWorkerFactory&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;ocrWorkerRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;createWorker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;eng&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ocrWorkerRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setParameters&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;tessedit_pageseg_mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;11&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;preserve_interword_spaces&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That keeps the editor responsive across repeated scans and makes the OCR step feel more like a tool and less like a blocking batch job.&lt;/p&gt;

&lt;h2&gt;
  
  
  The useful trick: match text, then map back to words
&lt;/h2&gt;

&lt;p&gt;The main implementation trick was simple and practical.&lt;/p&gt;

&lt;p&gt;For each OCR line, we rebuild a single line string, but we also keep the character offsets of every OCR word inside that string. That gives us a bridge between pattern matching and image geometry.&lt;/p&gt;

&lt;p&gt;So the flow becomes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reconstruct the OCR line as plain text.&lt;/li&gt;
&lt;li&gt;Run regexes for categories like email, phone, URL, date, or ID.&lt;/li&gt;
&lt;li&gt;Find which OCR words overlap each matched character range.&lt;/li&gt;
&lt;li&gt;Merge those word bounds into one redaction region.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That lets us keep the matching logic simple while still ending up with coordinates we can draw and edit.&lt;/p&gt;
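
&lt;p&gt;A minimal sketch of that bridge, assuming each OCR word keeps its character offsets in the rebuilt line string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;interface OcrWord {
  start: number; // character offset in the rebuilt line
  end: number;
  bbox: { x0: number; y0: number; x1: number; y1: number };
}

// Select the OCR words whose offsets overlap a matched character range.
function wordsInRange(words: OcrWord[], matchStart: number, matchEnd: number): OcrWord[] {
  return words.filter(function (w) {
    if (w.end &amp;lt;= matchStart) return false;
    if (w.start &amp;gt;= matchEnd) return false;
    return true; // the word overlaps the matched range
  });
}

// Merge the selected word boxes into one redaction region.
function mergedBounds(words: OcrWord[]) {
  return {
    x0: Math.min(...words.map(function (w) { return w.bbox.x0; })),
    y0: Math.min(...words.map(function (w) { return w.bbox.y0; })),
    x1: Math.max(...words.map(function (w) { return w.bbox.x1; })),
    y1: Math.max(...words.map(function (w) { return w.bbox.y1; })),
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;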

&lt;h2&gt;
  
  
  Tight boxes are risky
&lt;/h2&gt;

&lt;p&gt;One thing that became obvious very quickly: exact glyph bounds look precise in demos, but they are risky in real privacy tooling.&lt;/p&gt;

&lt;p&gt;If the box is too tight, the export can still leak fragments of the text around the edges. So after merging the matched word boxes, we expand the region with padding before inserting it into the editor.&lt;/p&gt;

&lt;p&gt;That padding step ended up being one of the most important product decisions in the whole flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;too little padding leaves readable fragments&lt;/li&gt;
&lt;li&gt;too much padding hides useful surrounding context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So OCR quality alone is not the main issue. Region construction is just as important.&lt;/p&gt;
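
&lt;p&gt;The padding step can be sketched as a clamped expansion; the fraction is a tuning knob, not a value from our pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;interface Rect { left: number; top: number; width: number; height: number; }

// Expand a merged word box by a fraction of its size, clamped to the
// image bounds, so the exported region fully covers the glyph edges.
function padRect(rect: Rect, fraction: number, imageWidth: number, imageHeight: number): Rect {
  const dx = rect.width * fraction;
  const dy = rect.height * fraction;
  const left = Math.max(0, rect.left - dx);
  const top = Math.max(0, rect.top - dy);
  return {
    left,
    top,
    width: Math.min(imageWidth - left, rect.width + 2 * dx),
    height: Math.min(imageHeight - top, rect.height + 2 * dy),
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;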

&lt;h2&gt;
  
  
  OCR should propose, not finalize
&lt;/h2&gt;

&lt;p&gt;This was the biggest product lesson.&lt;/p&gt;

&lt;p&gt;OCR-assisted redaction should not silently modify an image and export the result. It should insert reviewable regions into the editor and let the user confirm, delete, resize, or add more regions before saving.&lt;/p&gt;

&lt;p&gt;For privacy tools, review is not a fallback. It is part of the feature.&lt;/p&gt;

&lt;p&gt;That design also helped with the predictable OCR failure cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;low-contrast screenshots&lt;/li&gt;
&lt;li&gt;dense tables with tiny text&lt;/li&gt;
&lt;li&gt;mixed-language content&lt;/li&gt;
&lt;li&gt;broken OCR segmentation&lt;/li&gt;
&lt;li&gt;labels like &lt;code&gt;ID&lt;/code&gt; or &lt;code&gt;Total&lt;/code&gt; that match patterns but are not always sensitive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you accept that OCR is a candidate generator instead of a perfect decision-maker, the whole interaction model gets better.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real implementation boundary
&lt;/h2&gt;

&lt;p&gt;Tesseract.js is only the OCR engine. The hard part is the boundary around it.&lt;/p&gt;

&lt;p&gt;What actually made the feature useful was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;keeping the scan client-side&lt;/li&gt;
&lt;li&gt;reusing the worker efficiently&lt;/li&gt;
&lt;li&gt;preserving stable geometry&lt;/li&gt;
&lt;li&gt;matching only the categories we cared about&lt;/li&gt;
&lt;li&gt;padding regions conservatively&lt;/li&gt;
&lt;li&gt;requiring review before export&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the difference between an OCR demo and a privacy tool.&lt;/p&gt;

&lt;p&gt;If you are building something similar, I would strongly recommend optimizing for reviewable suggestions instead of "one-click automatic redaction." The first approach ships. The second usually overpromises.&lt;/p&gt;

&lt;p&gt;More implementation details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://happyimg.com/guides/how-ocr-assisted-redaction-works-with-tesseract-js" rel="noopener noreferrer"&gt;https://happyimg.com/guides/how-ocr-assisted-redaction-works-with-tesseract-js&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>privacy</category>
      <category>ocr</category>
    </item>
  </channel>
</rss>
