<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Aral Roca</title>
    <description>The latest articles on Forem by Aral Roca (@aralroca).</description>
    <link>https://forem.com/aralroca</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F51056%2Fe0e242b1-fc41-4f88-8163-629ee7692ecd.jpeg</url>
      <title>Forem: Aral Roca</title>
      <link>https://forem.com/aralroca</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aralroca"/>
    <language>en</language>
    <item>
      <title>I Built a Free Invoice Generator, Resume Builder, and Cover Letter Generator That Don't Require Signup</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Sat, 02 May 2026 19:51:53 +0000</pubDate>
      <link>https://forem.com/aralroca/i-built-a-free-invoice-generator-resume-builder-and-cover-letter-generator-that-dont-require-5f81</link>
      <guid>https://forem.com/aralroca/i-built-a-free-invoice-generator-resume-builder-and-cover-letter-generator-that-dont-require-5f81</guid>
      <description>&lt;p&gt;A friend asked me to recommend a free invoice generator yesterday. I opened the first ten results on Google and they all did the same thing: signup wall, email capture, or a watermark across the PDF unless you pay $12/month. Generating a PDF from form fields is a solved problem that somehow became a SaaS category.&lt;/p&gt;

&lt;p&gt;So I built three of them in a day. Invoice generator, resume builder, cover letter generator. No signup, no watermark, no data leaving your browser. Fill in the fields, see a live preview, download a clean PDF. Done.&lt;/p&gt;

&lt;p&gt;The three tools are an &lt;a href="https://kitmul.com/en/finance/invoice-generator" rel="noopener noreferrer"&gt;Invoice Generator&lt;/a&gt;, a &lt;a href="https://kitmul.com/en/writing/resume-builder" rel="noopener noreferrer"&gt;Resume Builder&lt;/a&gt;, and a &lt;a href="https://kitmul.com/en/writing/cover-letter-generator" rel="noopener noreferrer"&gt;Cover Letter Generator&lt;/a&gt;. All three run entirely in your browser. No data leaves your device. No account needed. No watermarks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3m5pfeq5iz1jydim445w.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3m5pfeq5iz1jydim445w.webp" alt="Invoice Generator with live preview showing a completed invoice with templates" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I built these instead of recommending existing ones
&lt;/h2&gt;

&lt;p&gt;When I went hunting for one, every single free invoice generator either required signup, limited you to one template, added its branding to the PDF, or was clearly a funnel for a SaaS product. Same story with resume builders: the free tier gives you one basic template, and the moment you want something that looks professional, you're staring at a paywall.&lt;/p&gt;

&lt;p&gt;The thing is, generating a PDF from form data is not a hard problem. The browser has everything it needs. &lt;a href="https://pdf-lib.js.org/" rel="noopener noreferrer"&gt;pdf-lib&lt;/a&gt; is an excellent open-source library that creates PDFs entirely in JavaScript. There is genuinely no reason your invoice data needs to touch someone else's server.&lt;/p&gt;

&lt;p&gt;So I built these tools with a shared PDF rendering layer and a template system inspired by &lt;a href="https://kitmul.com/en/writing/ppt-presentation-maker" rel="noopener noreferrer"&gt;PptPresentationMaker&lt;/a&gt;, which was already the most complex tool on Kitmul. The architecture is straightforward: React state for the form, an HTML/CSS live preview that updates on every keystroke, and a pdf-lib PDF that generates when you click download.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Invoice Generator: six templates and real math
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpux1a9zn9kk24qjch7id.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpux1a9zn9kk24qjch7id.webp" alt="Resume Builder with Classic template and live preview" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/finance/invoice-generator" rel="noopener noreferrer"&gt;Invoice Generator&lt;/a&gt; was the most technically interesting of the three. It has six templates (Clean, Modern, Classic, Bold, Minimal, Corporate), each with distinct header layouts and color schemes. You can upload your company logo, and the tool resizes it client-side using a canvas element before embedding it in the PDF with &lt;code&gt;embedPng()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The auto-calculations handle subtotals, percentage or fixed-amount discounts, and tax rates. Everything updates live in the preview. The currency selector supports twelve currencies with locale-aware formatting via &lt;code&gt;Intl.NumberFormat&lt;/code&gt;: the same amount renders as $1,234.56 in USD, picks up the right separators in EUR, and drops the decimals entirely in JPY, which has no subunit.&lt;/p&gt;
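
&lt;p&gt;That locale-aware formatting boils down to a single &lt;code&gt;Intl.NumberFormat&lt;/code&gt; call. A minimal sketch (the &lt;code&gt;en-US&lt;/code&gt; locale is illustrative; the tool's own locale handling may differ):&lt;/p&gt;

```javascript
// Locale-aware currency formatting with the built-in Intl.NumberFormat.
const format = (amount, currency) =>
  new Intl.NumberFormat('en-US', { style: 'currency', currency }).format(amount);

const usd = format(1234.56, 'USD'); // "$1,234.56"
const jpy = format(1234.56, 'JPY'); // "¥1,235" -- yen has no decimal subunit, so it rounds
```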

&lt;p&gt;The table rendering was the hardest part. pdf-lib doesn't have a table concept; you're positioning rectangles and text with pixel coordinates. I wrote a shared &lt;code&gt;drawTable&lt;/code&gt; helper that computes row heights based on text wrapping, handles alternating row backgrounds, and automatically breaks to a new page if the table overflows. That helper is now reusable across all three tools.&lt;/p&gt;
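
&lt;p&gt;The core of that row-height math can be sketched without pdf-lib at all. This is a simplified illustration that uses a fixed average glyph width; the real helper would use the font's &lt;code&gt;widthOfTextAtSize&lt;/code&gt; metrics instead, and the constants here are made up:&lt;/p&gt;

```javascript
// Wrap each cell's text to its column width, then size the row to the tallest cell.
const CHAR_WIDTH = 6;    // illustrative average glyph width in points
const LINE_HEIGHT = 12;  // illustrative line height in points

function wrapLines(text, colWidth) {
  const perLine = Math.max(1, Math.floor(colWidth / CHAR_WIDTH)); // chars per line
  const words = text.split(' ');
  const lines = [];
  let current = '';
  for (const word of words) {
    const candidate = current ? current + ' ' + word : word;
    if (candidate.length > perLine) {
      if (current) lines.push(current); // flush the full line
      current = word;
    } else {
      current = candidate;
    }
  }
  if (current) lines.push(current);
  return lines;
}

function rowHeight(cells, colWidths) {
  // The row is as tall as its most-wrapped cell.
  const tallest = Math.max(
    ...cells.map((cell, i) => wrapLines(cell, colWidths[i]).length)
  );
  return tallest * LINE_HEIGHT;
}
```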

&lt;p&gt;One thing that surprised me: the header layout calculation. Templates with colored backgrounds need to know the exact height of the header content (logo height + business info lines + invoice details) before drawing the background rectangle. I ended up computing it dynamically based on which fields are filled in, so the header shrinks if you leave fields empty and expands if you add a logo.&lt;/p&gt;
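
&lt;p&gt;A hypothetical version of that dynamic header calculation (the field names and constants are illustrative, not the actual implementation):&lt;/p&gt;

```javascript
// Compute the header height from whichever fields are actually filled in,
// so the colored background rectangle shrinks or grows to match.
const LINE_HEIGHT = 14;
const PADDING = 16;

function headerHeight({ logoHeight = 0, businessLines = [], detailLines = [] }) {
  const filled = businessLines.filter(Boolean).length;  // skip empty fields
  const details = detailLines.filter(Boolean).length;
  const textBlock = Math.max(filled, details) * LINE_HEIGHT; // columns sit side by side
  return PADDING * 2 + Math.max(logoHeight, 0) + textBlock;
}

// Header with one line of business info and no logo stays compact...
const compact = headerHeight({ businessLines: ['Acme LLC'], detailLines: ['INV-001'] });
// ...and expands when a logo and more fields are added.
const tall = headerHeight({
  logoHeight: 40,
  businessLines: ['Acme LLC', 'billing@acme.example', '555 0100'],
  detailLines: ['INV-001', '2026-05-02'],
});
```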

&lt;h2&gt;
  
  
  The Resume Builder: five templates, all ATS-friendly
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/writing/resume-builder" rel="noopener noreferrer"&gt;Resume Builder&lt;/a&gt; is the most complex tool at around 900 lines. It has five templates: Classic (single column, maximum ATS compatibility), Modern (with a colored sidebar for contact info and skills), Professional (two-column header), Minimal (lots of whitespace), and Executive (bold accent underlines).&lt;/p&gt;

&lt;p&gt;I made a deliberate choice to use only &lt;code&gt;StandardFonts&lt;/code&gt; from pdf-lib (Helvetica and Helvetica-Bold). Custom fonts look nicer, but they break &lt;a href="https://en.wikipedia.org/wiki/Applicant_tracking_system" rel="noopener noreferrer"&gt;Applicant Tracking Systems&lt;/a&gt;. ATS parsers expect standard fonts and simple text positioning. Every template outputs real, selectable text drawn top-to-bottom, never images of text. Even the Modern template with its sidebar draws the main content first in the reading order, so an ATS reads your experience before your contact details.&lt;/p&gt;

&lt;p&gt;Sections are reorderable. You can drag Experience above Education or add Certifications, Languages, and Projects sections. Each experience entry supports multiple bullet points with add/remove controls. The whole thing generates multi-page PDFs when your resume is longer than one page, with proper page breaks that never split a section header from its content.&lt;/p&gt;
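
&lt;p&gt;The "never split a section header from its content" rule can be expressed as a small guard before each section is drawn. An illustrative sketch with made-up constants, not the builder's actual code:&lt;/p&gt;

```javascript
// Before drawing a section header, check that the header plus at least one
// content line still fits above the bottom margin; otherwise start a new page.
const MARGIN = 50;         // bottom margin in points
const HEADER_HEIGHT = 24;  // illustrative section-header height
const LINE_HEIGHT = 14;    // illustrative body line height

function needsPageBreak(cursorY) {
  // cursorY counts down from the top edge, PDF-style.
  return MARGIN > cursorY - (HEADER_HEIGHT + LINE_HEIGHT);
}
```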

&lt;h2&gt;
  
  
  The Cover Letter Generator: the simple one done right
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87o43g8jn3xwyf2lq1d7.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87o43g8jn3xwyf2lq1d7.webp" alt="Cover Letter Generator with Traditional template and filled-in letter" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/writing/cover-letter-generator" rel="noopener noreferrer"&gt;Cover Letter Generator&lt;/a&gt; is the simplest of the three, but that's the point. A cover letter is a formatted business letter, and getting the formatting wrong makes you look sloppy. The tool handles four templates (Traditional, Modern, Professional, Simple) with proper business letter conventions: sender info placement, date formatting, greeting, body paragraphs, and sign-off.&lt;/p&gt;

&lt;p&gt;The Traditional template puts your contact info top-right (the formal standard). The Modern template uses a large name with a horizontal accent line. The Professional template has a colored header block. You can add as many body paragraphs as you need.&lt;/p&gt;

&lt;p&gt;No AI writes your letter. You write it, the tool formats it. I think that matters. A hiring manager who has read a hundred AI-generated cover letters can spot them instantly. Your words in a clean layout will stand out more than a GPT-generated letter in a fancy design.&lt;/p&gt;

&lt;h2&gt;
  
  
  The privacy argument is real
&lt;/h2&gt;

&lt;p&gt;These three tools handle sensitive information. Your invoice has your business details, client names, and financial data. Your resume has your employment history, email, phone number. Your cover letter names specific companies you're applying to.&lt;/p&gt;

&lt;p&gt;Every alternative I tested sends this data to a server. Zoho, Canva, Resume.io, Zety; they all require accounts, and once you've created an account, your data lives on their servers under their privacy policies. Some of them explicitly state they use your data for "service improvement," which is a polite way of saying they train models on your resume.&lt;/p&gt;

&lt;p&gt;With browser-based tools, the architecture makes privacy the default. There's no server to send data to. The &lt;a href="https://pdf-lib.js.org/" rel="noopener noreferrer"&gt;pdf-lib&lt;/a&gt; library generates the PDF in a Web Worker, the browser creates a Blob URL, and the download happens through a local anchor click. Your data exists in browser memory until you close the tab.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd build next
&lt;/h2&gt;

&lt;p&gt;Receipt Generator (the mirror of invoices, for the receiving side), NDA Generator (simple template-based legal documents), and Meeting Minutes Generator (structured notes to PDF). They all follow the same pattern: form data, live preview, clean PDF. The shared layout helpers make each new tool faster to build than the last.&lt;/p&gt;

&lt;p&gt;If you're a freelancer creating invoices, a job seeker polishing your resume, or anyone who needs a professional document without the signup dance, give these a try. They're at &lt;a href="https://kitmul.com" rel="noopener noreferrer"&gt;kitmul.com&lt;/a&gt; alongside 400+ other free browser-based tools.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Related tools: &lt;a href="https://kitmul.com/en/writing/ppt-presentation-maker" rel="noopener noreferrer"&gt;PPT Presentation Maker&lt;/a&gt; · &lt;a href="https://kitmul.com/en/pdf/text-to-pdf" rel="noopener noreferrer"&gt;Text to PDF&lt;/a&gt; · &lt;a href="https://kitmul.com/en/pdf/pdf-merger" rel="noopener noreferrer"&gt;PDF Merger&lt;/a&gt; · &lt;a href="https://kitmul.com/en/finance/budget-planner" rel="noopener noreferrer"&gt;Budget Planner&lt;/a&gt; · &lt;a href="https://kitmul.com/en/pdf/image-to-pdf" rel="noopener noreferrer"&gt;Image to PDF&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://pdf-lib.js.org/" rel="noopener noreferrer"&gt;pdf-lib: Create and modify PDFs in JavaScript&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Applicant_tracking_system" rel="noopener noreferrer"&gt;Applicant Tracking System (Wikipedia)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/NumberFormat" rel="noopener noreferrer"&gt;Intl.NumberFormat (MDN)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://owl.purdue.edu/owl/subject_specific_writing/professional_technical_writing/basic_business_letters/index.html" rel="noopener noreferrer"&gt;US business letter format (Purdue OWL)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.jobscan.co/blog/ats-resume/" rel="noopener noreferrer"&gt;How ATS parsers read resumes (Jobscan)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>showdev</category>
      <category>javascript</category>
      <category>opensource</category>
    </item>
    <item>
      <title>The O(n^2) Bug That Looked Like Clean Code</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Fri, 01 May 2026 20:30:09 +0000</pubDate>
      <link>https://forem.com/aralroca/the-on2-bug-that-looked-like-clean-code-3556</link>
      <guid>https://forem.com/aralroca/the-on2-bug-that-looked-like-clean-code-3556</guid>
      <description>&lt;p&gt;We shipped a feature on a Tuesday. By Thursday the API was timing out. The monitoring dashboard showed p99 latency climbing from 80ms to 14 seconds over 48 hours. Nothing had changed except the number of users.&lt;/p&gt;

&lt;p&gt;The culprit was three lines of code that looked completely reasonable during review:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;match&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;permissions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;viewer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clean. Readable. Functional style. Also O(n * m), which in our case was O(n^2) because users and permissions grew at the same rate. At 200 users, 40,000 comparisons. At 2,000 users, 4,000,000. At 20,000 users, 400,000,000. The fix was two lines: build a Map first, then look up by key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;permMap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;permissions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt;&lt;span class="p"&gt;]));&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;permMap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;viewer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;O(n + m) instead of O(n * m). Problem solved. But the real question is: why did nobody catch it?&lt;/p&gt;

&lt;h2&gt;
  
  
  The O(n^2) trap hides in plain sight
&lt;/h2&gt;

&lt;p&gt;Quadratic complexity is the most dangerous performance class in production software. Not because it's the slowest; O(2^n) and O(n!) are far worse. But because it &lt;em&gt;looks fine&lt;/em&gt;. It passes code review. It works in development. It even works in staging if your test dataset is small enough. Then it hits production with real data volumes and everything falls apart.&lt;/p&gt;

&lt;p&gt;The pattern is almost always the same: a lookup operation nested inside an iteration. &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find" rel="noopener noreferrer"&gt;Array.prototype.find()&lt;/a&gt;, &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/includes" rel="noopener noreferrer"&gt;Array.prototype.includes()&lt;/a&gt;, &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/indexOf" rel="noopener noreferrer"&gt;Array.prototype.indexOf()&lt;/a&gt;; these are all O(n) operations. Put them inside a loop and you've got O(n^2). Put them inside two nested loops and you've got O(n^3). JavaScript's expressive array methods make this especially easy to miss because the code reads like English instead of looking like nested &lt;code&gt;for&lt;/code&gt; loops.&lt;/p&gt;

&lt;p&gt;Here are five real patterns I've seen break production systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 1: The innocent &lt;code&gt;.includes()&lt;/code&gt; inside &lt;code&gt;.filter()&lt;/code&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Looks clean, but O(n * m)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;activeUsers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;allUsers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;u&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;activeIds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;activeIds&lt;/code&gt; is an array with 10,000 entries and &lt;code&gt;allUsers&lt;/code&gt; has 50,000 entries, you're doing 500,000,000 comparisons. Convert &lt;code&gt;activeIds&lt;/code&gt; to a &lt;code&gt;Set&lt;/code&gt; and it drops to 50,000:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;activeSet&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;activeIds&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;activeUsers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;allUsers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;u&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;activeSet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;has&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set/has" rel="noopener noreferrer"&gt;Set.has()&lt;/a&gt; is O(1). That one change takes you from O(n * m) to O(n + m).&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 2: Deduplication by comparison
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// O(n^2) deduplication&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;unique&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; 
  &lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findIndex&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I see this pattern weekly in code reviews. The &lt;code&gt;findIndex&lt;/code&gt; scans from the start for every element. For 10,000 items, that's up to 100 million comparisons. The fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;seen&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;unique&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;seen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;has&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;seen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or even simpler with a Map if you need the full objects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;unique&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[...&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;])).&lt;/span&gt;&lt;span class="nf"&gt;values&lt;/span&gt;&lt;span class="p"&gt;()];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Pattern 3: The cascading &lt;code&gt;.map().filter().map()&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;This one is subtle. Each individual method is O(n), and chaining three O(n) passes is still O(n) overall; 3n differs from n only by a constant factor. The pipeline only becomes O(n^2) when one of those callbacks hides a linear lookup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;enriched&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;orders&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;customer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;customers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c1"&gt;// O(n) inside O(n)&lt;/span&gt;
  &lt;span class="p"&gt;}))&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;customer&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;active&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;formatForDisplay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The functional pipeline looks elegant, but the nested &lt;code&gt;.find()&lt;/code&gt; makes it quadratic. A pre-built lookup table fixes it with zero loss of readability.&lt;/p&gt;
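
&lt;p&gt;Here is the same pipeline with the lookup table built once up front. The sample data is mine, and the final &lt;code&gt;formatForDisplay&lt;/code&gt; step is omitted:&lt;/p&gt;

```javascript
// Illustrative sample data.
const customers = [
  { id: 1, name: 'Ada', active: true },
  { id: 2, name: 'Bob', active: false },
];
const orders = [
  { id: 'A1', customerId: 1, total: 50 },
  { id: 'A2', customerId: 2, total: 75 },
  { id: 'A3', customerId: 1, total: 20 },
];

// O(m): build the lookup table once.
const byId = new Map(customers.map(c => [c.id, c]));

// O(n): each order now does an O(1) Map lookup instead of an O(m) .find().
const enriched = orders
  .map(order => ({ ...order, customer: byId.get(order.customerId) }))
  .filter(order => order.customer?.active);
```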

&lt;h2&gt;
  
  
  Pattern 4: The recursive tree flattener
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;flattenComments&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;comments&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;comments&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;comment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;comment&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;comment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;replies&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nf"&gt;flattenComments&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;comment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;replies&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every recursive call allocates a fresh accumulator, and the spread copies each child's results upward one level at a time. Total copying is O(n · depth): roughly O(n log n) for a balanced tree, but O(n^2) for the deeply nested reply chains real comment threads produce. The fix is threading a single accumulator through the recursion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;flattenComments&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;comments&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;comment&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;comments&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;comment&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;comment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;replies&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;flattenComments&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;comment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;replies&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Pattern 5: The SQL query in a loop (the N+1 problem)
&lt;/h2&gt;

&lt;p&gt;This one isn't JavaScript-specific. It's the most common performance antipattern in web applications:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;orders&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT * FROM orders WHERE status = ?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;active&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT * FROM order_items WHERE order_id = ?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;1 query for orders + n queries for items = n+1 queries total. With 1,000 orders, that's 1,001 database round trips. The &lt;a href="https://stackoverflow.com/questions/97197/what-is-the-n1-selects-problem-in-orm-object-relational-mapping" rel="noopener noreferrer"&gt;N+1 query problem&lt;/a&gt; is well-documented, but it keeps showing up because ORMs make it easy to trigger accidentally through lazy loading.&lt;/p&gt;

&lt;p&gt;The fix is a JOIN or a batch query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;orders&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT * FROM orders WHERE status = ?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;active&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;orderIds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;o&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT * FROM order_items WHERE order_id IN (?)&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;orderIds&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
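&lt;p&gt;The batch query returns a flat list of items, so you still need to hang them back on their orders. A hypothetical sketch (the &lt;code&gt;order_id&lt;/code&gt; field and function names are assumptions, not code from the incident above); one Map build keeps the reattachment at O(n) instead of an O(n^2) per-order &lt;code&gt;.filter()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

```javascript
// Group fetched items back onto their orders in one linear pass.
// itemsByOrder is a Map, so each lookup is O(1); total work is
// O(orders + items) instead of the O(orders * items) that a
// .filter() per order would cost.
function attachItems(orders, items) {
  const itemsByOrder = new Map();
  for (const item of items) {
    if (!itemsByOrder.has(item.order_id)) itemsByOrder.set(item.order_id, []);
    itemsByOrder.get(item.order_id).push(item);
  }
  return orders.map(order => ({
    ...order,
    items: itemsByOrder.get(order.id) ?? [],
  }));
}
```

&lt;p&gt;This is the in-memory half of what ORMs call eager loading: one query per table, one pass to join.&lt;/p&gt;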



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g42m7a9p3irth9fs4mw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g42m7a9p3irth9fs4mw.webp" alt="A developer debugging performance issues on their screen; the hardest bugs to find are the ones that only appear at scale" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to catch quadratic complexity before production does
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Profile with realistic data sizes.&lt;/strong&gt; If your test suite runs with 10 records and production has 100,000, your tests are measuring nothing useful. Create benchmark fixtures that match production cardinality. The difference between O(n) and O(n^2) is invisible at n=10 and catastrophic at n=10,000.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Grep for &lt;code&gt;.find()&lt;/code&gt;, &lt;code&gt;.includes()&lt;/code&gt;, &lt;code&gt;.indexOf()&lt;/code&gt; inside &lt;code&gt;.map()&lt;/code&gt;, &lt;code&gt;.filter()&lt;/code&gt;, &lt;code&gt;.reduce()&lt;/code&gt;.&lt;/strong&gt; This mechanical check catches the majority of accidental quadratic patterns in JavaScript codebases. If you want to see how dramatically different complexity classes scale, plug the numbers into a &lt;a href="https://kitmul.com/en/visualizers-logic/big-o-complexity-comparator" rel="noopener noreferrer"&gt;Big O Complexity Comparator&lt;/a&gt;; it plots every class from O(1) to O(n!) on the same chart.&lt;/p&gt;
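&lt;p&gt;What that grep catches, in miniature; a made-up example (the names are hypothetical), with the Set-based rewrite alongside:&lt;br&gt;
&lt;/p&gt;

```javascript
// O(n * m): .includes() rescans bannedIds for every user.
function activeUsersSlow(users, bannedIds) {
  return users.filter(u => !bannedIds.includes(u.id));
}

// O(n + m): build the Set once up front, then each .has() is O(1).
function activeUsersFast(users, bannedIds) {
  const banned = new Set(bannedIds);
  return users.filter(u => !banned.has(u.id));
}
```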

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3bky7fzj7dgacaqoi4b.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3bky7fzj7dgacaqoi4b.webp" alt="All eight Big O complexity curves plotted on a logarithmic scale; the quadratic curve dominates linear at n=50, and exponential leaves the chart entirely" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Add complexity annotations to code reviews.&lt;/strong&gt; When reviewing code, ask: what is n? How big can it get? What's the complexity of the inner operation? Making this explicit prevents the "it works fine in dev" surprise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Set latency budgets with alerts.&lt;/strong&gt; If an endpoint's p95 latency crosses 500ms, page someone. Quadratic complexity typically manifests as a gradual degradation, not a sudden failure; perfect for latency-based alerts that catch the trend before users notice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Know your standard library.&lt;/strong&gt; &lt;code&gt;Array.sort()&lt;/code&gt; is O(n log n). &lt;code&gt;Set.has()&lt;/code&gt; is O(1). &lt;code&gt;Array.includes()&lt;/code&gt; is O(n). &lt;code&gt;Map.get()&lt;/code&gt; is O(1). The difference between reaching for an Array method and a Map/Set method is often the difference between O(n^2) and O(n). The &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Keyed_collections" rel="noopener noreferrer"&gt;MDN Web Docs on Keyed Collections&lt;/a&gt; cover when to use each.&lt;/p&gt;
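&lt;p&gt;A rough way to feel the difference yourself; a micro-benchmark sketch (absolute numbers vary wildly by machine and engine, so treat them as relative):&lt;br&gt;
&lt;/p&gt;

```javascript
// Rough micro-benchmark: 2,000 membership tests against 50,000 ids.
// Array.prototype.includes() scans linearly (O(n) per lookup);
// Set.prototype.has() is a hash lookup (O(1) per lookup).
const ids = Array.from({ length: 50_000 }, (_, i) => i);
const probes = Array.from({ length: 2_000 }, (_, i) => i * 25); // every 25th id
const idSet = new Set(ids);

let t = performance.now();
const viaArray = probes.map(p => ids.includes(p));
const arrayMs = performance.now() - t;

t = performance.now();
const viaSet = probes.map(p => idSet.has(p));
const setMs = performance.now() - t;

// Same answers, very different cost curves as n grows.
console.log({ arrayMs, setMs });
```

&lt;p&gt;The gap widens as n grows; the exact ratio on your machine matters less than the trend.&lt;/p&gt;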

&lt;h2&gt;
  
  
  The real lesson
&lt;/h2&gt;

&lt;p&gt;The dangerous thing about O(n^2) isn't that it's slow. It's that it's invisible until it isn't. Every one of the patterns above passed code review because the code was readable, idiomatic, and correct. It just didn't scale.&lt;/p&gt;

&lt;p&gt;Performance complexity isn't something you optimize for later. It's a design decision you make at the time you write the code. The question isn't "does this work?" but "does this work when n is 100x what I'm testing with?"&lt;/p&gt;

&lt;p&gt;Research from &lt;a href="https://web.dev/articles/vitals" rel="noopener noreferrer"&gt;Google&lt;/a&gt; consistently shows that every 100ms of added latency carries a measurable business cost. A quadratic algorithm that adds 50ms at your current scale and 5,000ms at next year's scale is a ticking time bomb.&lt;/p&gt;

&lt;p&gt;Build the habit of reaching for Map and Set by default. Question every &lt;code&gt;.find()&lt;/code&gt; and &lt;code&gt;.includes()&lt;/code&gt; inside a loop. Profile with production-sized data. The code that kills your app won't look dangerous; that's exactly what makes it dangerous.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dive deeper
&lt;/h2&gt;

&lt;p&gt;For a complete reference on all eight complexity classes with interactive charts, runnable code, and interview patterns, read the &lt;a href="https://kitmul.com/en/visualizers-logic/big-o-complexity-comparator" rel="noopener noreferrer"&gt;Complete Guide to Big O Complexity&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>performance</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Shift from Determinism to Probabilism Is Bigger Than Analog to Digital</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Thu, 30 Apr 2026 19:23:37 +0000</pubDate>
      <link>https://forem.com/aralroca/the-shift-from-determinism-to-probabilism-is-bigger-than-analog-to-digital-lim</link>
      <guid>https://forem.com/aralroca/the-shift-from-determinism-to-probabilism-is-bigger-than-analog-to-digital-lim</guid>
      <description>&lt;p&gt;The other day I had a bug in production. Something was rendering wrong in the UI and I couldn't figure out where it was coming from. I gave Claude the URL, it opened Chrome, inspected the HTML in the browser, cross-referenced what it saw in the DOM with my source code, and found the exact line where the error was. Not the file; the line. It navigated between the rendered output and the codebase, matched what was broken on screen to the component that produced it, and pointed me to the fix.&lt;/p&gt;

&lt;p&gt;That moment stuck with me. Not because AI saved me time (it does that daily), but because of what it revealed about the nature of the answer. The model didn't &lt;em&gt;know&lt;/em&gt; where the bug was. It inferred it. It observed the rendered HTML, estimated which parts of the code could produce that output, and surfaced the most probable origin. That probabilistic inference across two different representations of the same system; browser and codebase; was more effective than my deterministic debugging would have been.&lt;/p&gt;

&lt;p&gt;We're living through something bigger than most people realize. The shift from deterministic to probabilistic computing isn't just a technical upgrade. It's a change in how knowledge gets created.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first paradigm shift: analog to digital
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5f4jepdiyt321w7t3bi.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5f4jepdiyt321w7t3bi.webp" alt="Vintage audio equipment with VU meters and patch cables; the analog world where signals degraded with every copy" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The move from analog to digital was the defining technology transition of the late 20th century. It converted continuous signals into discrete data. Suddenly you could copy information without degradation. Transmit it globally. Store it efficiently. The internet, distributed systems, modern software; all of it descends from that single insight: continuous signals can be represented as sequences of ones and zeros.&lt;/p&gt;

&lt;p&gt;But there was something that transition left untouched: the process of creation itself.&lt;/p&gt;

&lt;p&gt;Digital software is deterministic. Given the same input, it produces the same output. Every line of code, every system, every product had to be explicitly designed, written, and maintained by a human. The computer executed instructions. It didn't generate anything it hadn't been told to generate. A &lt;a href="https://kitmul.com/en/developer/sql-formatter" rel="noopener noreferrer"&gt;SQL formatter&lt;/a&gt; formats SQL because someone wrote exact rules for how SQL should be formatted. A &lt;a href="https://kitmul.com/en/developer/password-generator" rel="noopener noreferrer"&gt;password generator&lt;/a&gt; produces random strings because someone implemented &lt;a href="https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator" rel="noopener noreferrer"&gt;CSPRNG&lt;/a&gt; algorithms that define precisely how randomness gets produced.&lt;/p&gt;

&lt;p&gt;Deterministic systems are predictable, testable, and reliable. They're also fundamentally limited: they can only do what someone has already imagined and coded.&lt;/p&gt;

&lt;h2&gt;
  
  
  The second paradigm shift: deterministic to probabilistic
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fn0d9851mb6y3d6f74a.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fn0d9851mb6y3d6f74a.webp" alt="Area chart showing global information generated from 1986 to 2025, split across three paradigms: analog (teal), deterministic digital (blue), and probabilistic/AI (amber). The analog-to-digital crossover happened in 2002; human-to-AI content crossover in 2024. Logarithmic scale from 10 EB to 100 ZB." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With large language models and deep learning, we entered a new phase. Systems that don't execute rigid instructions but generate results based on probability distributions.&lt;/p&gt;

&lt;p&gt;The difference is structural:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We no longer describe exactly &lt;em&gt;what&lt;/em&gt; to do. We train models to learn &lt;em&gt;how&lt;/em&gt; to do it.&lt;/li&gt;
&lt;li&gt;We don't generate information manually. We infer it.&lt;/li&gt;
&lt;li&gt;We produce answers, content, and decisions that were never explicitly programmed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think about what an &lt;a href="https://kitmul.com/en/ai/ai-content-detector" rel="noopener noreferrer"&gt;AI content detector&lt;/a&gt; does. It doesn't have a list of "AI-written sentences" to match against. It computes statistical properties of text; &lt;a href="https://en.wikipedia.org/wiki/Zipf%27s_law" rel="noopener noreferrer"&gt;Zipf's law&lt;/a&gt; conformity, punctuation entropy, sentence length distributions; and estimates a probability that the text was machine-generated. The detector itself is a probabilistic system analyzing the output of another probabilistic system. That's a sentence that would have been meaningless ten years ago.&lt;/p&gt;

&lt;p&gt;Or consider &lt;a href="https://kitmul.com/en/ai/automatic-subtitle-generator" rel="noopener noreferrer"&gt;automatic subtitle generation&lt;/a&gt;. &lt;a href="https://openai.com/index/whisper/" rel="noopener noreferrer"&gt;OpenAI's Whisper model&lt;/a&gt; doesn't follow if-then rules to transcribe speech. It processes audio spectrograms and predicts the most probable sequence of tokens that corresponds to what was said. It gets it right most of the time. Not all of the time. That "most of the time" is the defining characteristic of probabilistic systems.&lt;/p&gt;

&lt;p&gt;This shift has a direct impact on the most valuable resource that exists: time. AI reduces the effort required to create, analyze, and predict by orders of magnitude.&lt;/p&gt;

&lt;h2&gt;
  
  
  Knowledge generation without precedent
&lt;/h2&gt;

&lt;p&gt;The key difference is that probabilistic systems can work with the unknown. From learned patterns, they can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate text, images, or code that has never existed before.&lt;/li&gt;
&lt;li&gt;Predict future outcomes from incomplete data.&lt;/li&gt;
&lt;li&gt;Find relationships that were never explicitly defined.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This breaks a historical constraint: we no longer need to write every possible case. The system can generalize.&lt;/p&gt;

&lt;p&gt;Consider the &lt;a href="https://kitmul.com/en/agile-project-management/monte-carlo-forecaster" rel="noopener noreferrer"&gt;Monte Carlo forecaster&lt;/a&gt;. Classical project management asked teams to estimate how long tasks would take and then added the numbers up. Monte Carlo simulation does something smarter: it runs thousands of scenarios using historical data and gives you a probability distribution of delivery dates. "There's an 85% chance you'll finish by March 15" is more useful than "the estimate is March 10." But here's a nuance that matters: Monte Carlo is deterministic code. Statistical formulas executed with perfect precision. There's no inference; there's simulation. It's probabilistic thinking implemented on deterministic infrastructure. Today an LLM could make the same prediction without any of that code; you pass it the team's historical data and it gives you a reasonable estimate. But "reasonable" isn't "reliable." Until models reach 99.99% accuracy, hand-coded statistical simulations remain the safe bet. Monte Carlo is exactly the kind of tool that marks the transition: probabilistic thinking that still needs deterministic crutches.&lt;/p&gt;
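&lt;p&gt;That simulation fits in a few lines. A minimal sketch, assuming you have arrays of historical cycle times per remaining task (the names here are hypothetical, not the tool's actual code): deterministic statistics producing a probabilistic answer.&lt;br&gt;
&lt;/p&gt;

```javascript
// Minimal Monte Carlo delivery forecast: for each run, sample one
// historical duration per task and sum them; repeat thousands of
// times, then read percentiles off the resulting distribution.
function forecast(taskHistories, runs = 10_000) {
  const totals = Array.from({ length: runs }, () =>
    taskHistories.reduce(
      (sum, samples) =>
        sum + samples[Math.floor(Math.random() * samples.length)],
      0
    )
  );
  totals.sort((a, b) => a - b);
  const percentile = p => totals[Math.floor((p / 100) * (totals.length - 1))];
  // "85% of simulated futures finished by p85" is the useful output.
  return { p50: percentile(50), p85: percentile(85), p95: percentile(95) };
}
```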

&lt;p&gt;The same principle applies everywhere. A &lt;a href="https://kitmul.com/en/ai/background-remover" rel="noopener noreferrer"&gt;background remover&lt;/a&gt; running a neural network in the browser doesn't have rules about what counts as "background." It has learned probability distributions over millions of segmented images and applies those distributions to your photo. A &lt;a href="https://kitmul.com/en/ai/prompt-generator" rel="noopener noreferrer"&gt;prompt generator&lt;/a&gt; doesn't store pre-written prompts; it structures natural language patterns that probabilistically produce better model outputs.&lt;/p&gt;

&lt;p&gt;Even tools that seem purely deterministic are being reshaped. &lt;a href="https://kitmul.com/en/writing/html-to-markdown" rel="noopener noreferrer"&gt;HTML to Markdown conversion&lt;/a&gt; is deterministic; the same HTML always produces the same Markdown. But the &lt;em&gt;reason&lt;/em&gt; that tool exists is probabilistic: people need clean Markdown because feeding raw HTML to an LLM &lt;a href="https://kitmul.com/en/blog/html-to-markdown-llm-tokens" rel="noopener noreferrer"&gt;wastes 60-80% of tokens on structural noise&lt;/a&gt;. A deterministic tool serving a probabilistic ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The current limitations: why it's not perfect yet
&lt;/h2&gt;

&lt;p&gt;Despite the potential, current AI has real constraints:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inference time.&lt;/strong&gt; Generating responses means processing enormous quantities of tokens. A complex reasoning chain in a frontier model can take 30-60 seconds. That's fast compared to human analysis, but slow compared to a database query. The latency gap between "compute a hash" (nanoseconds) and "reason about a bug" (seconds) is about nine orders of magnitude.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Probabilistic errors.&lt;/strong&gt; Models don't "know" in the classical sense. They estimate. As of April 2026, &lt;a href="https://openai.com/index/introducing-gpt-5-5/" rel="noopener noreferrer"&gt;GPT-5.5&lt;/a&gt;, &lt;a href="https://www.anthropic.com/news/claude-opus-4-7" rel="noopener noreferrer"&gt;Claude Opus 4.7&lt;/a&gt;, and &lt;a href="https://deepmind.google/models/gemini/" rel="noopener noreferrer"&gt;Gemini 3.1 Pro&lt;/a&gt; score between 89% and 92% on &lt;a href="https://www.vals.ai/benchmarks/mmlu_pro" rel="noopener noreferrer"&gt;MMLU-Pro&lt;/a&gt;; the harder benchmark replacing the original MMLU. Each generation climbs a few points, but the numbers are still statistical. A &lt;a href="https://kitmul.com/en/visualizers-logic/graph-traversal-animator" rel="noopener noreferrer"&gt;graph traversal animator&lt;/a&gt; will always find the shortest path because BFS is deterministic. An LLM asked to find the shortest path will &lt;em&gt;probably&lt;/em&gt; find it, but it might hallucinate an edge that doesn't exist.&lt;/p&gt;
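&lt;p&gt;For contrast, here's what the deterministic side looks like; a textbook BFS shortest path over an adjacency list, which returns the same answer on every run:&lt;br&gt;
&lt;/p&gt;

```javascript
// BFS on an unweighted graph: always returns a shortest path,
// identically, for the same input. No probability involved.
function shortestPath(graph, start, goal) {
  const queue = [[start]];
  const visited = new Set([start]);
  while (queue.length > 0) {
    const path = queue.shift();
    const node = path[path.length - 1];
    if (node === goal) return path;
    for (const next of graph[node] ?? []) {
      if (!visited.has(next)) {
        visited.add(next);
        queue.push([...path, next]);
      }
    }
  }
  return null; // goal unreachable
}
```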

&lt;p&gt;&lt;strong&gt;Classical infrastructure.&lt;/strong&gt; These models run on hardware designed for deterministic computation: CPUs, GPUs, TPUs. &lt;a href="https://www.nvidia.com/en-us/data-center/h100/" rel="noopener noreferrer"&gt;NVIDIA's H100&lt;/a&gt; is optimized for parallel matrix multiplication, which is what transformers need, but the underlying architecture is still classical. We're solving probabilistic problems with deterministic machines.&lt;/p&gt;

&lt;h2&gt;
  
  
  The trajectory: approaching 100% accuracy
&lt;/h2&gt;

&lt;p&gt;The trend line is clear. Each new model generation improves on benchmarks, reduces error rates, and expands generalization capacity. &lt;a href="https://deepmind.google/technologies/gemini/" rel="noopener noreferrer"&gt;Google's Gemini&lt;/a&gt;, &lt;a href="https://www.anthropic.com/claude" rel="noopener noreferrer"&gt;Anthropic's Claude&lt;/a&gt;, and &lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI's GPT&lt;/a&gt; families are converging toward accuracy levels that make the distinction between "correct" and "highly probable" practically meaningless for many tasks.&lt;/p&gt;

&lt;p&gt;When models reach 99.99% accuracy on routine cognitive tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trust in AI systems will match or exceed trust in human judgment.&lt;/li&gt;
&lt;li&gt;Most intellectual tasks that follow learnable patterns will be delegated entirely.&lt;/li&gt;
&lt;li&gt;The marginal cost of generating knowledge will approach zero.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We're not there yet. But the distance is shrinking with every model release.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bottleneck: classical compute vs. quantum compute
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr8sg00fzausagkv29yi.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr8sg00fzausagkv29yi.webp" alt="IBM Quantum System One installed at the Fraunhofer Institute; the world's first circuit-based commercial quantum computer, inside its airtight glass enclosure" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's an idea worth sitting with: we're solving a fundamentally probabilistic problem using deterministic tools.&lt;/p&gt;

&lt;p&gt;GPUs and TPUs parallelize massive calculations, but they operate under classical principles. This creates real constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High energy consumption. Training GPT-5 class models required &lt;a href="https://www.cometapi.com/how-many-gpus-to-train-gpt-5/" rel="noopener noreferrer"&gt;tens of thousands of NVIDIA H100 GPUs&lt;/a&gt; for months, at costs exceeding $100 million.&lt;/li&gt;
&lt;li&gt;Expensive scaling. More parameters means more hardware, more cooling, more electricity.&lt;/li&gt;
&lt;li&gt;Significant latency on large models.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The theoretical alternative is &lt;a href="https://en.wikipedia.org/wiki/Quantum_computing" rel="noopener noreferrer"&gt;quantum computing&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Companies like &lt;a href="https://www.ibm.com/quantum" rel="noopener noreferrer"&gt;IBM&lt;/a&gt;, &lt;a href="https://quantumai.google/" rel="noopener noreferrer"&gt;Google&lt;/a&gt;, and &lt;a href="https://www.dwavesys.com/" rel="noopener noreferrer"&gt;D-Wave Systems&lt;/a&gt; are exploring QPUs (Quantum Processing Units) that work directly with probabilistic states through &lt;a href="https://en.wikipedia.org/wiki/Quantum_superposition" rel="noopener noreferrer"&gt;superposition&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/Quantum_entanglement" rel="noopener noreferrer"&gt;entanglement&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In theory, this would allow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Solving certain calculations exponentially faster.&lt;/li&gt;
&lt;li&gt;Modeling probabilistic systems natively instead of simulating them on deterministic hardware.&lt;/li&gt;
&lt;li&gt;Drastically reducing the computational cost of AI inference.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to see what quantum circuits actually look like, the &lt;a href="https://kitmul.com/en/visualizers-logic/quantum-circuit-simulator" rel="noopener noreferrer"&gt;Quantum Circuit Simulator&lt;/a&gt; lets you build and run circuits in the browser. Two lines of code create a &lt;a href="https://en.wikipedia.org/wiki/Bell_state" rel="noopener noreferrer"&gt;Bell State&lt;/a&gt;; a maximally entangled pair of qubits where measurement of one instantly determines the other. That kind of native probabilistic behavior is exactly what current AI infrastructure lacks.&lt;/p&gt;
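&lt;p&gt;You don't need quantum hardware to see the arithmetic. A toy state-vector sketch in plain JavaScript (real amplitudes only, no simulator library) applies a Hadamard and a CNOT to two qubits:&lt;br&gt;
&lt;/p&gt;

```javascript
// Two-qubit state as amplitudes over [|00>, |01>, |10>, |11>].
// Hadamard on qubit 0, then CNOT (control 0, target 1), yields
// the Bell state (|00> + |11>) / sqrt(2).
const SQRT1_2 = Math.SQRT1_2;

function hadamardOnFirst([a00, a01, a10, a11]) {
  // H mixes the first qubit: |0> becomes (|0>+|1>)/sqrt(2),
  // |1> becomes (|0>-|1>)/sqrt(2).
  return [
    SQRT1_2 * (a00 + a10),
    SQRT1_2 * (a01 + a11),
    SQRT1_2 * (a00 - a10),
    SQRT1_2 * (a01 - a11),
  ];
}

function cnot([a00, a01, a10, a11]) {
  // Control = first qubit: flip the target, i.e. swap |10> and |11>.
  return [a00, a01, a11, a10];
}

const bell = cnot(hadamardOnFirst([1, 0, 0, 0]));
// bell is [1/sqrt(2), 0, 0, 1/sqrt(2)]: the |01> and |10> amplitudes
// vanish, so measuring one qubit fixes the other.
```

&lt;p&gt;Simulating it classically costs memory exponential in qubit count; the hardware holds that distribution natively.&lt;/p&gt;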

&lt;h2&gt;
  
  
  The hard problem: quantum error correction
&lt;/h2&gt;

&lt;p&gt;Quantum computing isn't ready for this role yet. The main obstacle is &lt;a href="https://en.wikipedia.org/wiki/Quantum_error_correction" rel="noopener noreferrer"&gt;Quantum Error Correction&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Quantum systems are extremely sensitive to noise and interference. Every interaction with the environment can collapse a qubit's superposition, corrupting the computation. Current quantum processors have error rates that make them impractical for the sustained, reliable computation that AI inference requires.&lt;/p&gt;

&lt;p&gt;For a QPU to be viable at scale, three things need to happen:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Error rates must drop dramatically.&lt;/strong&gt; Current physical qubits have error rates around 0.1-1%. Useful quantum computation needs rates below 0.0001%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stable qubit counts must scale.&lt;/strong&gt; IBM's current roadmap targets &lt;a href="https://research.ibm.com/blog/ibm-quantum-roadmap-2025" rel="noopener noreferrer"&gt;100,000 qubits by 2033&lt;/a&gt;. That's ambitious, but the engineering challenges at each step are enormous.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fault-tolerant architectures must mature.&lt;/strong&gt; &lt;a href="https://en.wikipedia.org/wiki/Toric_code" rel="noopener noreferrer"&gt;Surface codes&lt;/a&gt; and other error correction schemes work in principle but require thousands of physical qubits per logical qubit. The overhead is still prohibitive.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What's at stake isn't just speed. It's energy.&lt;/p&gt;

&lt;p&gt;Data centers consumed &lt;a href="https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai" rel="noopener noreferrer"&gt;around 415 TWh in 2024&lt;/a&gt;; 1.5% of global electricity. The IEA estimates they'll exceed &lt;a href="https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works" rel="noopener noreferrer"&gt;1,000 TWh by 2026&lt;/a&gt;, with AI as the main growth driver. Training a frontier model consumes the electricity equivalent of thousands of households for a year. Every inference query burns &lt;a href="https://www.bcs.org/articles-opinion-and-research/is-ai-or-quantum-computing-more-energy-intensive/" rel="noopener noreferrer"&gt;over 33 Wh on long prompts&lt;/a&gt;; ten times what a Google search uses. And this scales. More models, more agents, more robotics with embedded AI; every layer adds energy demand.&lt;/p&gt;

&lt;p&gt;The day quantum error correction gets solved, that equation changes radically. Current QPUs consume &lt;a href="https://patentpc.com/blog/quantum-computing-energy-consumption-how-sustainable-is-it-latest-data" rel="noopener noreferrer"&gt;around 25 kW&lt;/a&gt;, most of it on cryogenic cooling; not computation. But quantum computing works with the probabilistic problem natively instead of simulating it with trillions of matrix multiplications. &lt;a href="https://www.arquimea.com/blog/how-will-quantum-technologies-improve-ai-power-consumption/" rel="noopener noreferrer"&gt;Quantum compression algorithms already demonstrate 84% energy efficiency gains&lt;/a&gt; on specific AI tasks. And partial error correction is enabling &lt;a href="https://phys.org/news/2025-12-quantum-machine-nears-partial-error.html" rel="noopener noreferrer"&gt;quantum models to maintain high accuracy with thousands of qubits instead of millions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When quantum error correction matures, we won't just have faster AI. We'll have AI that consumes orders of magnitude less energy per inference. That's what turns quantum computing from a lab curiosity into viable infrastructure for the billions of agents the future requires.&lt;/p&gt;

&lt;p&gt;Until that happens, we'll continue running probabilistic AI on deterministic hardware. Which, honestly, works remarkably well for how fundamentally mismatched the paradigms are.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means practically
&lt;/h2&gt;

&lt;p&gt;This isn't abstract philosophy. The shift from determinism to probabilism changes how you build, how you work, and how you think about tools.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://kitmul.com/en/visualizers-logic/binary-search-tree-lab" rel="noopener noreferrer"&gt;binary search tree lab&lt;/a&gt; teaches deterministic algorithms. Insert a node, traverse the tree, get the same result every time. That kind of certainty is still valuable. Databases still need B-trees. Routing still needs Dijkstra. &lt;a href="https://kitmul.com/en/random/blue-noise-generator" rel="noopener noreferrer"&gt;Blue noise generators&lt;/a&gt; still need deterministic sampling algorithms to produce well-distributed random points.&lt;/p&gt;
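&lt;p&gt;The deterministic half of that contrast fits in a few lines. Here's a toy BST insert and in-order traversal (an illustrative sketch, not the lab's actual code); run it as many times as you like, the output never changes:&lt;/p&gt;

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard BST insert: same inputs, same tree, every run."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    """In-order traversal yields the keys in sorted order."""
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []

root = None
for k in [5, 3, 8, 1]:
    root = insert(root, k)
print(inorder(root))  # [1, 3, 5, 8] - every time, on every machine
```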

&lt;p&gt;But the layer above those deterministic primitives is increasingly probabilistic. The database query is deterministic; the AI agent that decides &lt;em&gt;which&lt;/em&gt; query to run is probabilistic. The algorithm is deterministic; the model that selects &lt;em&gt;which&lt;/em&gt; algorithm fits the problem is probabilistic. The text is deterministic once written; the system that &lt;a href="https://kitmul.com/en/writing/image-to-text" rel="noopener noreferrer"&gt;extracts text from images&lt;/a&gt; using OCR neural networks is probabilistic.&lt;/p&gt;

&lt;p&gt;We're building a stack where deterministic systems execute and probabilistic systems decide. That's new. And it's only going to accelerate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The next step isn't better AI; it's a composable compute stack
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71yw0igtngwm60f8str5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71yw0igtngwm60f8str5.webp" alt="Physics and mathematics equations on a black background; the formal layer that unifies deterministic, probabilistic, and quantum computation" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What comes next doesn't replace one paradigm with another. It composes them.&lt;/p&gt;

&lt;p&gt;Think about how modern infrastructure already works. CPUs execute deterministic logic; database transactions, cryptographic verification, your operating system's kernel. GPUs and TPUs solve probabilistic problems; they train models, run inference, process distributions across millions of parameters. Each layer does what the other can't. Nobody suggests replacing CPUs with GPUs. You combine them.&lt;/p&gt;

&lt;p&gt;QPUs complete the third layer. They solve a class of problems that classical hardware simulates poorly: combinatorial optimization, high-dimensional distribution sampling, search across exponential spaces. AI inference doesn't just get faster; it becomes viable at scales that are currently intractable.&lt;/p&gt;

&lt;p&gt;The stack looks like this: deterministic systems execute and verify. Probabilistic systems propose and generate. Quantum systems optimize what neither of the other two can touch.&lt;/p&gt;

&lt;p&gt;But there's a piece almost nobody is talking about yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Models talking to machines talking to models
&lt;/h2&gt;

&lt;p&gt;So far we think of AI as something that receives a prompt and returns text. That's like thinking the internet is email. The next layer is communication between probabilistic models; and between those models and the deterministic hardware they control.&lt;/p&gt;

&lt;p&gt;A language model analyzes sensor data and decides it needs more information from a specific area. It communicates that decision to a vision model controlling a drone. The drone moves, collects data, processes it through another specialized model, and returns the results to the first one. No human intervened in the cycle. No human decided that area was interesting. The system inferred it.&lt;/p&gt;

&lt;p&gt;That's not automation. Automation executes what a human designed. This is different: probabilistic systems deciding what data to collect, how to collect it, and what to do with what they find. Robots with AI models that choose to explore information sources that wouldn't have occurred to us.&lt;/p&gt;

&lt;p&gt;Think about a marine drone with chemical sensors and a model trained on oceanic biodiversity. It doesn't follow a preprogrammed route. It detects an anomaly in water composition, infers it could indicate an unknown microbial community, adjusts its trajectory, and takes samples. It finds something no marine biologist would have looked for because none would have predicted it would be there. Another model analyzes the samples, identifies compounds with pharmaceutical potential, and asks the drone to return to the same zone with different sensors.&lt;/p&gt;

&lt;p&gt;That's genuinely new knowledge. Not extracted from existing data or inferred from human text. Generated by a system that decided to go look for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  When prediction takes nanoseconds
&lt;/h2&gt;

&lt;p&gt;The current limitations are real. Inference takes seconds. Accuracy hovers around 86-95% depending on the benchmark. But those are the limitations of the first generation of a paradigm that's barely getting started. The trajectory points toward models with 99.99% accuracy and nanosecond response times.&lt;/p&gt;

&lt;p&gt;When that happens, the world reorganizes in ways that are hard to imagine from where we stand now.&lt;/p&gt;

&lt;p&gt;A self-driving car that takes 30 milliseconds to decide is a car that brakes late. One that decides in nanoseconds reacts before the obstacle finishes appearing. A network of medical models with 99.99% accuracy doesn't assist the doctor; it diagnoses with a reliability no human can match. A supply chain where every node has a predictive model communicating with all the others doesn't need quarterly planning; it reoptimizes in real time, every millisecond.&lt;/p&gt;

&lt;p&gt;But the really important thing is what happens when probabilistic inference becomes so fast and accurate that it's functionally indistinguishable from deterministic certainty. The distinction between "computing" and "inferring" disappears. Your operating system doesn't need to distinguish between an arithmetic operation and a prediction. The compiler doesn't need to know if the result comes from formal logic or a 400B parameter model. It's all computation; part rule-based, part distribution-based, part quantum-optimized. Seamlessly integrated.&lt;/p&gt;

&lt;p&gt;AI stops being a tool you open and becomes an infrastructure layer you don't even notice. Like electricity. Like TCP/IP. It's everywhere and you don't think about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Knowledge beyond the human
&lt;/h2&gt;

&lt;p&gt;Robots with AI models don't just automate what we used to do. They perceive what we can't.&lt;/p&gt;

&lt;p&gt;An ultrasonic sensor coupled with a materials model detects microfractures in a wind turbine that no visual inspection would find. A portable spectrometer with a chemical model identifies contaminants at concentrations human protocols don't measure. A hydrophone array with an acoustic model classifies marine species by sound patterns no biologist has cataloged yet.&lt;/p&gt;

&lt;p&gt;These aren't incremental efficiency improvements. They're new sources of knowledge. Data that existed in the physical world but was invisible to us because we didn't have the right sensors coupled with the right intelligence to interpret them.&lt;/p&gt;

&lt;p&gt;And here's where the loop closes. Those robots don't just collect data; the models inside them decide what data is worth collecting. But the cycle goes further than that. An AI analyzing ocean temperature anomalies determines that the existing sensor grid is too coarse; it needs readings at depths and frequencies no current instrument covers. So it designs a new sensor specification. A manufacturing robot builds it. A deployment robot installs it on a fleet of underwater drones. Those drones collect data that no previous system could capture, and that data trains a better model, which identifies the next gap in sensing capability, and the cycle begins again.&lt;/p&gt;

&lt;p&gt;No human decided what to measure. No human designed the sensor. No human chose where to deploy it. The entire chain; from identifying a knowledge gap to filling it with new physical hardware; was driven by probabilistic inference.&lt;/p&gt;

&lt;p&gt;When millions of such systems operate simultaneously, each one designing sensors for the others, deploying instruments that didn't exist a cycle ago, and feeding the results back into shared models, humanity gains access to a layer of reality it literally couldn't perceive before. It's not an improvement on what we already knew. It's access to what we didn't know we didn't know.&lt;/p&gt;

&lt;h2&gt;
  
  
  The full stack
&lt;/h2&gt;

&lt;p&gt;Thirty years ago the question was "can you code?". Ten years ago it was "can you use APIs?". Today it's "can you direct models?". Tomorrow it'll be irrelevant; models will direct each other.&lt;/p&gt;

&lt;p&gt;Deterministic systems execute and verify. Probabilistic systems propose, generate, and decide. Quantum systems optimize the intractable. Sensors and robots extend all of this into the physical world. And communication between models closes the loop: systems that discover what they don't know, decide how to find out, and act to get it. Without human intervention. Without anyone telling them where to look.&lt;/p&gt;

&lt;p&gt;If the analog-to-digital shift redefined how we store information, and the deterministic-to-probabilistic shift is redefining how we generate knowledge, the full integration; models, hardware, sensors, robots, quantum computing; redefines the limits of what's possible to perceive and understand.&lt;/p&gt;

&lt;p&gt;We won't be using better tools. We'll be surrounded by an intelligence that sees what we don't see, seeks what we don't seek, and finds what we didn't know existed.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computerscience</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I Hid a Secret Message in a Cat Photo and Nobody Noticed for Six Months</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Wed, 29 Apr 2026 19:07:14 +0000</pubDate>
      <link>https://forem.com/aralroca/i-hid-a-secret-message-in-a-cat-photo-and-nobody-noticed-for-six-months-4a47</link>
      <guid>https://forem.com/aralroca/i-hid-a-secret-message-in-a-cat-photo-and-nobody-noticed-for-six-months-4a47</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Note: dev.to modifies the image, the original image with the secret is the original post 🤘&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I found a steganography challenge in a CTF last year that had me staring at a picture of a cat for two hours. The image looked completely normal; 800x600 pixels of an orange tabby sitting on a keyboard. No metadata anomalies, no appended ZIP files, no obvious artifacts. The flag was hiding in the least significant bits of the blue channel. Once I extracted it, the message was 43 characters long. The cat hadn't changed at all.&lt;/p&gt;

&lt;p&gt;That experience sent me down a rabbit hole. Steganography is one of those topics that sounds like movie-hacker fiction until you actually try it. Then you realize it's just math; and surprisingly simple math at that.&lt;/p&gt;

&lt;h2&gt;
  
  
  What steganography actually is
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Steganography" rel="noopener noreferrer"&gt;Steganography&lt;/a&gt; is the practice of hiding data inside other data so that nobody knows the hidden data exists. This is fundamentally different from cryptography. Encryption scrambles a message so it can't be read; steganography hides the message so nobody even looks for it.&lt;/p&gt;

&lt;p&gt;The distinction matters. An encrypted file screams "I have secrets." A stego image says nothing. It's a photo of a cat. Or a sunset. Or your company logo. The best steganography produces carrier files that are statistically indistinguishable from unmodified originals.&lt;/p&gt;

&lt;p&gt;The word itself comes from Greek: steganos (covered) + graphein (writing). It's been around since ancient Greece; Herodotus wrote about a guy who shaved a slave's head, tattooed a message on the scalp, waited for the hair to grow back, and sent the slave to deliver the message. We've upgraded to pixels since then.&lt;/p&gt;

&lt;h2&gt;
  
  
  How LSB encoding works
&lt;/h2&gt;

&lt;p&gt;The most common image steganography technique is &lt;a href="https://en.wikipedia.org/wiki/Least_significant_bit" rel="noopener noreferrer"&gt;Least Significant Bit (LSB)&lt;/a&gt; encoding. Here's why it works.&lt;/p&gt;

&lt;p&gt;A pixel in a 24-bit RGB image has three color channels, each stored as an 8-bit value (0-255). The last bit of each byte; the least significant bit; contributes the smallest possible change to the color value. Flipping it changes the channel value by exactly 1. The difference between RGB(142, 87, 203) and RGB(143, 87, 202) is invisible to the human eye.&lt;/p&gt;

&lt;p&gt;So you take your secret message, convert it to binary, and replace the LSBs of the image pixels with your message bits. Each pixel gives you 3 bits of storage (one per channel). A 1920x1080 image has 2,073,600 pixels; that's 6,220,800 bits of storage, or roughly 760 KB of hidden data. In practice you'd use far less to avoid detection, but the theoretical capacity is enormous.&lt;/p&gt;
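&lt;p&gt;You can sanity-check both claims; that flipping an LSB moves a channel value by exactly 1, and the capacity arithmetic; in a few lines:&lt;/p&gt;

```python
# One LSB flip moves a channel value by exactly 1
value = 142                         # 10001110 in binary
flipped = (value & 0xFE) | (~value & 1)  # force the opposite LSB
print(abs(flipped - value))         # 1

# LSB capacity of a 1920x1080 24-bit RGB image
bits = 1920 * 1080 * 3              # one bit per channel per pixel
print(bits)                         # 6220800
print(f"{bits / 8 / 1024:.0f} KB")  # ~759 KB of hidden data
```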

&lt;p&gt;Here's a minimal Python implementation to make this concrete:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PIL&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;text_to_bits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Convert text to a binary string with a null terminator.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;bits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;ord&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;08b&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;bits&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;00000000&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;  &lt;span class="c1"&gt;# null terminator to mark end of message
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;bits&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hide_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;output_path&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Hide a message in the LSBs of an image.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_path&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;convert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;RGB&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;pixels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;flat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pixels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;flatten&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;bits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;text_to_bits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;ValueError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Message too long: need &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; bits, have &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bit&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bits&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Clear the LSB, then set it to our message bit
&lt;/span&gt;        &lt;span class="n"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="mh"&gt;0xFE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;stego&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pixels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromarray&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stego&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;uint8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PNG&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hidden &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; chars in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; bits (&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;% of capacity)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;extract_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_path&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Extract a hidden message from the LSBs of an image.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_path&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;convert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;RGB&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;flat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;flatten&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;bits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;chars&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bits&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;byte&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bits&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;byte&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;00000000&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="n"&gt;chars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;chr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chars&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Usage
&lt;/span&gt;&lt;span class="nf"&gt;hide_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cat.png&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;The flag is CTF{hidden_in_plain_sight}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;stego_cat.png&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;extract_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;stego_cat.png&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key operation is in the embedding loop of &lt;code&gt;hide_message&lt;/code&gt;: &lt;code&gt;(flat[i] &amp;amp; 0xFE) | int(bit)&lt;/code&gt;. The bitwise AND with &lt;code&gt;0xFE&lt;/code&gt; (11111110 in binary) clears the LSB, and the OR sets it to whatever our message bit is. That's the entire trick. Everything else is just converting text to bits and iterating over pixels.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hiding a message step by step
&lt;/h2&gt;

&lt;p&gt;If you don't want to write Python; or you're on a machine where you can't install PIL; the &lt;a href="https://kitmul.com/en/security/steganography-tool" rel="noopener noreferrer"&gt;Steganography Tool on Kitmul&lt;/a&gt; does the same thing in your browser. No uploads, no server processing. The image never leaves your device.&lt;/p&gt;

&lt;p&gt;Here's the workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Upload a carrier image.&lt;/strong&gt; PNG works best because it's lossless. JPEG compression will destroy your hidden bits; more on that later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type your secret message.&lt;/strong&gt; The tool shows you the available capacity based on your image dimensions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Download the stego image.&lt;/strong&gt; It looks identical to the original. Pixel-for-pixel, the differences are imperceptible.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To extract, switch to Reveal mode, upload the stego image, and the hidden text appears.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvad7owks3qgv4lnyyla.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvad7owks3qgv4lnyyla.webp" alt="The steganography tool showing Hide mode with upload area, secret message input, and download button" width="800" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The entire operation runs client-side using the Canvas API and typed arrays. Your secret message never touches a network connection. This matters; if you're hiding sensitive information, sending it to a third-party server rather defeats the purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Detection and steganalysis; how to NOT get caught
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Steganalysis" rel="noopener noreferrer"&gt;Steganalysis&lt;/a&gt; is the art of detecting steganography, and it's more sophisticated than you might think.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visual inspection&lt;/strong&gt; won't catch LSB encoding. But statistical analysis will. The simplest test is a chi-squared analysis of pixel value distributions. In a natural image, pixel values have a characteristic distribution. LSB embedding flattens pairs of values (e.g., 142 and 143 become equally probable), which shows up as an anomaly in the histogram.&lt;/p&gt;
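&lt;p&gt;A toy version of that pair-equalization test makes the effect visible. This sketch (simplified; real steganalysis tools window the image and aggregate many such statistics) computes a chi-squared statistic over value pairs (2k, 2k+1); full-capacity LSB embedding drives it toward zero:&lt;/p&gt;

```python
from collections import Counter
import random

def lsb_chi_squared(values):
    """Chi-squared statistic over LSB pairs (2k, 2k+1).
    In a natural image, pair counts are lopsided; full-capacity
    LSB embedding equalizes them, pushing the statistic toward 0."""
    hist = Counter(values)
    stat = 0.0
    for k in range(128):
        observed = hist[2 * k]
        expected = (hist[2 * k] + hist[2 * k + 1]) / 2
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat

# A flat region: every channel value is 100 before embedding
clean = [100] * 10000
# The same region after embedding random message bits in every LSB
rng = random.Random(42)
stego = [(100 & 0xFE) | rng.randint(0, 1) for _ in range(10000)]

print(lsb_chi_squared(clean))   # large: the (100, 101) pair is lopsided
print(lsb_chi_squared(stego))   # near zero: pair counts equalized
```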

&lt;p&gt;Tools like &lt;a href="https://github.com/DominicBreuker/stego-toolkit" rel="noopener noreferrer"&gt;StegExpose&lt;/a&gt; and &lt;a href="https://www.openstego.com/" rel="noopener noreferrer"&gt;OpenStego&lt;/a&gt; include detection modules. In competitive CTF environments, &lt;code&gt;zsteg&lt;/code&gt; and &lt;code&gt;steghide&lt;/code&gt; are the go-to extraction tools.&lt;/p&gt;

&lt;p&gt;Here are practical tips if you want your stego image to survive scrutiny:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use less capacity.&lt;/strong&gt; Embedding data in only 10-20% of available pixels makes statistical detection much harder. Using 100% of capacity is a forensic red flag.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Randomize pixel selection.&lt;/strong&gt; Instead of writing sequentially from pixel 0, use a pseudorandom sequence seeded by a password to select which pixels carry data. This distributes the modifications uniformly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Choose busy images.&lt;/strong&gt; Photos with lots of texture, noise, and color variation hide LSB changes better than smooth gradients or solid blocks. A photo of a forest floor beats a photo of a white wall.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Never use JPEG as a carrier.&lt;/strong&gt; JPEG's lossy compression modifies pixel values during encoding. Your hidden bits will be destroyed when the image is saved. Always use PNG, BMP, or TIFF for LSB steganography.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strip metadata before sharing.&lt;/strong&gt; EXIF data showing the image was processed by a steganography tool is an obvious giveaway.&lt;/li&gt;
&lt;/ul&gt;
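The "randomize pixel selection" tip amounts to a seeded shuffle. Here's one way to sketch it; `mulberry32` is a tiny seedable PRNG used for illustration, and a real tool would derive the seed from a proper password hash rather than the toy mixing function shown:

```javascript
// Derive a deterministic pseudorandom pixel order from a password so
// embedded bits are scattered instead of written sequentially from pixel 0.

function seedFromPassword(password) {
  // FNV-1a-style mixing, illustration only; use a real hash in practice.
  let h = 2166136261;
  for (const ch of password) {
    h ^= ch.codePointAt(0);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fisher-Yates shuffle of pixel indices, identical for a given password.
function pixelOrder(pixelCount, password) {
  const rand = mulberry32(seedFromPassword(password));
  const order = Array.from({ length: pixelCount }, (_, i) => i);
  for (let i = pixelCount - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [order[i], order[j]] = [order[j], order[i]];
  }
  return order;
}
```

The sender and receiver both compute `pixelOrder(width * height, password)` and walk the same permutation, so no pixel map ever needs to be transmitted.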

&lt;h2&gt;
  
  
  When to combine steganography with encryption
&lt;/h2&gt;

&lt;p&gt;Steganography and encryption solve different problems, and the strongest approach uses both. Here's why.&lt;/p&gt;

&lt;p&gt;If an attacker suspects your image contains hidden data and successfully extracts the LSBs, they'll see your plaintext message. Game over. But if you encrypt the message first (using AES-256, for example), the extracted bits look like random noise. The attacker can't tell whether they found a message or just normal image data.&lt;/p&gt;

&lt;p&gt;The practical workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Encrypt your message with a strong symmetric cipher.&lt;/li&gt;
&lt;li&gt;Hide the ciphertext in the image using LSB encoding.&lt;/li&gt;
&lt;li&gt;Share the stego image publicly.&lt;/li&gt;
&lt;li&gt;Share the decryption key through a separate channel.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This gives you two layers of protection: the message is both hidden (steganography) and unreadable (encryption). An attacker would need to both detect the hidden data and crack the encryption, a significantly harder problem than either alone.&lt;/p&gt;

&lt;p&gt;Kitmul's &lt;a href="https://dev.to/en/security"&gt;Security and Cryptography tools&lt;/a&gt; include AES encryption, hash generators, and other utilities that complement the steganography tool for this exact workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-world use cases
&lt;/h2&gt;

&lt;p&gt;Steganography isn't just a CTF party trick. It has legitimate and important applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Digital watermarking.&lt;/strong&gt; Publishers, photographers, and media companies embed invisible watermarks in images to track unauthorized distribution. If a leaked image surfaces, the embedded watermark identifies which recipient leaked it. This is how several major movie studios track screener copies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Whistleblowing and censorship resistance.&lt;/strong&gt; In countries with heavy internet surveillance, steganography allows activists to share information through innocent-looking images posted on social media. The image passes inspection by automated content filters; the hidden message reaches its intended audience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Covert communication.&lt;/strong&gt; Intelligence agencies have used steganography since at least the early 2000s. The FBI's 2010 arrest of Russian spies (the "Illegals Program") revealed they were embedding encrypted messages in images posted to public websites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CTF challenges.&lt;/strong&gt; Capture The Flag competitions love steganography because it tests a different skill set than standard crypto challenges. You need to identify that steganography was used at all before you can begin extraction. Common CTF stego techniques include LSB encoding, appended data, palette manipulation in GIF files, and audio spectrum hiding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proof of ownership.&lt;/strong&gt; Artists and content creators can embed copyright information or ownership proofs directly into their work. Unlike visible watermarks, these don't degrade the visual quality of the piece.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zgodgsof6kh6fxd8fu4.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zgodgsof6kh6fxd8fu4.webp" alt="A system of linear equations on paper; the math behind steganography is simpler than it looks" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The limits of pixel encoding
&lt;/h2&gt;

&lt;p&gt;LSB steganography has real constraints worth understanding. The carrier image must be lossless; any lossy compression step (JPEG, WebP lossy) will corrupt embedded data. The message capacity scales with image dimensions, but using more than about 15-20% of capacity makes the image vulnerable to statistical detection. And the technique only hides data; it doesn't protect it from extraction by someone who knows what to look for.&lt;/p&gt;

&lt;p&gt;For most practical purposes, image steganography works best as one layer in a defense-in-depth strategy. Hide the message, encrypt the content, and use a secure channel to share the key. No single technique is bulletproof, but the combination raises the bar significantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/security/steganography-tool" rel="noopener noreferrer"&gt;Steganography Tool&lt;/a&gt; on Kitmul lets you hide and reveal messages in PNG images directly in your browser. Everything runs client-side; no data is uploaded, no accounts required, no limits. Upload an image, type a message, download the result. Then try extracting it to verify the round trip.&lt;/p&gt;

&lt;p&gt;If you're interested in security and privacy tools, the &lt;a href="https://kitmul.com/en/security" rel="noopener noreferrer"&gt;Security and Cryptography collection&lt;/a&gt; includes hash generators, encryption tools, password generators, and more, all running locally in your browser.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;All processing runs locally in your browser. No images or messages are sent to any server. The tool is free, open, and has no accounts or limits.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>I Stopped Paying for Subtitle Services After Running Whisper in a Browser Tab</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Tue, 28 Apr 2026 17:05:00 +0000</pubDate>
      <link>https://forem.com/aralroca/i-stopped-paying-for-subtitle-services-after-running-whisper-in-a-browser-tab-1bd8</link>
      <guid>https://forem.com/aralroca/i-stopped-paying-for-subtitle-services-after-running-whisper-in-a-browser-tab-1bd8</guid>
      <description>&lt;p&gt;Last Tuesday I needed subtitles for a 12-minute product demo. The video was in English, the audience was international, and the deadline was two hours away.&lt;/p&gt;

&lt;p&gt;My first instinct was the usual cloud suspects. Rev quoted me $1.50 per minute with a 24-hour turnaround. Descript wanted a subscription. Happy Scribe's free tier maxed out at one minute. Even YouTube's auto-captions required uploading, waiting for processing, then manually downloading the .srt file from Studio; a workflow designed for YouTube, not for the rest of the internet.&lt;/p&gt;

&lt;p&gt;Then I thought: OpenAI released &lt;a href="https://github.com/openai/whisper" rel="noopener noreferrer"&gt;Whisper&lt;/a&gt; as open source in 2022. It's been ported to &lt;a href="https://onnxruntime.ai/" rel="noopener noreferrer"&gt;ONNX&lt;/a&gt;, to &lt;a href="https://developer.apple.com/documentation/coreml" rel="noopener noreferrer"&gt;Core ML&lt;/a&gt;, to &lt;a href="https://webassembly.org/" rel="noopener noreferrer"&gt;WebAssembly&lt;/a&gt;. If someone can run Stable Diffusion in a browser tab, running a speech-to-text model shouldn't be harder.&lt;/p&gt;

&lt;p&gt;So I built one. The &lt;a href="https://kitmul.com/en/ai/automatic-subtitle-generator" rel="noopener noreferrer"&gt;Automatic Subtitle Generator&lt;/a&gt; on Kitmul runs Whisper entirely in your browser. No upload, no account, no subscription. Drop a video, get a .vtt or .srt file.&lt;/p&gt;

&lt;p&gt;Here's how it works under the hood and why the privacy angle matters more than most people realize.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Whisper works (the short version)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://openai.com/index/whisper/" rel="noopener noreferrer"&gt;Whisper&lt;/a&gt; is an encoder-decoder &lt;a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)" rel="noopener noreferrer"&gt;transformer&lt;/a&gt; trained on 680,000 hours of multilingual audio. The encoder converts a log-mel spectrogram of the audio into a sequence of embeddings. The decoder autoregressively generates text tokens, predicting each word based on the audio context and all previous tokens.&lt;/p&gt;

&lt;p&gt;The clever part is the training data. Instead of curating a clean dataset, OpenAI scraped the internet for audio paired with existing transcripts; YouTube videos with community captions, podcast show notes, audiobooks with their text counterparts. The sheer volume of noisy, real-world data is what gives Whisper its robustness. It handles accents, background music, and cross-talk far better than older models that were trained on read speech in quiet rooms.&lt;/p&gt;

&lt;p&gt;The model comes in sizes from &lt;code&gt;tiny&lt;/code&gt; (39M parameters) to &lt;code&gt;large-v3&lt;/code&gt; (1.5B parameters). The browser version uses a quantized variant that balances accuracy with the memory constraints of running inside a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API" rel="noopener noreferrer"&gt;Web Worker&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The browser pipeline
&lt;/h2&gt;

&lt;p&gt;When you drop a video into the &lt;a href="https://kitmul.com/en/ai/automatic-subtitle-generator" rel="noopener noreferrer"&gt;subtitle generator&lt;/a&gt;, this is what happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Audio extraction.&lt;/strong&gt; The video's audio track is separated using &lt;a href="https://ffmpegwasm.netlify.app/" rel="noopener noreferrer"&gt;FFmpeg compiled to WebAssembly&lt;/a&gt;. No server round-trip; the demuxing happens in your browser's memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resampling.&lt;/strong&gt; Whisper expects 16kHz mono audio. If the source is 44.1kHz stereo (most videos), an &lt;code&gt;OfflineAudioContext&lt;/code&gt; handles the conversion via the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API" rel="noopener noreferrer"&gt;Web Audio API&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chunked inference.&lt;/strong&gt; The audio is split into 30-second chunks (Whisper's attention window) and fed through the ONNX model inside a Web Worker. This keeps the main thread responsive; you can scroll the page while transcription runs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timestamp alignment.&lt;/strong&gt; Whisper produces word-level timestamps. The tool merges these into subtitle segments of 1-3 lines, keeping each segment under 42 characters wide (the &lt;a href="https://www.bbc.co.uk/accessibility/forproducts/guides/subtitles/" rel="noopener noreferrer"&gt;BBC subtitle guidelines&lt;/a&gt; standard for readability).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Format export.&lt;/strong&gt; You choose &lt;a href="https://www.w3.org/TR/webvtt1/" rel="noopener noreferrer"&gt;WebVTT&lt;/a&gt; or &lt;a href="https://en.wikipedia.org/wiki/SubRip" rel="noopener noreferrer"&gt;SRT&lt;/a&gt;. Both are plain text. Both work everywhere.&lt;/li&gt;
&lt;/ol&gt;
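Step 4, the timestamp alignment, has no ML in it at all and is easy to sketch. The `{ text, start, end }` word shape below is an assumption standing in for whatever the real pipeline's word-timestamp objects look like:

```javascript
// Merge word-level timestamps into subtitle segments of at most 42
// characters. Word objects ({ text, start, end }, times in seconds) mirror
// Whisper-style word timestamps; the exact shape is an assumption.
const MAX_CHARS = 42;

function mergeWords(words) {
  const segments = [];
  let current = null;
  for (const word of words) {
    if (current && (current.text + " " + word.text).length <= MAX_CHARS) {
      current.text += " " + word.text; // extend the open segment
      current.end = word.end;
    } else {
      current = { text: word.text, start: word.start, end: word.end };
      segments.push(current); // start a new segment
    }
  }
  return segments;
}
```

A production version would also break segments on long pauses and sentence boundaries, but the greedy character budget above is the core of it.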

&lt;p&gt;The first run downloads the model weights (~40-80MB depending on language). After that, your browser caches them. Subsequent runs start almost instantly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffiiob0fs4kdxwguovkpm.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffiiob0fs4kdxwguovkpm.webp" alt="A laptop screen showing a video editing timeline in a creative workspace" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why client-side subtitles matter
&lt;/h2&gt;

&lt;p&gt;The privacy argument is straightforward, but people underestimate how many scenarios it covers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Legal depositions.&lt;/strong&gt; Law firms transcribe testimony recordings. Uploading those to Rev or Otter.ai means a third party now has access to privileged communications. Every cloud transcription service's terms of service includes some version of "we may use your content to improve our models." Even if they don't today, you've already uploaded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-release content.&lt;/strong&gt; Marketing teams subtitle product demos before launch. Internal training videos contain unreleased features. If your competitor's PR team is monitoring cloud transcription APIs (and some do; it's surprisingly cheap to scrape aggregated anonymized data), you've just leaked your roadmap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Medical and therapeutic contexts.&lt;/strong&gt; Therapists recording sessions for supervision. Doctors dictating notes. HIPAA doesn't care that the transcription service promises encryption; if PHI leaves your device, you need a BAA in place. Running locally sidesteps the entire compliance question.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Journalism.&lt;/strong&gt; Source protection is sacred. Uploading an interview with a whistleblower to any cloud service, no matter how reputable, creates a copy outside your control.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/blog/why-client-side-tools-are-more-private" rel="noopener noreferrer"&gt;Kitmul approach to privacy&lt;/a&gt; applies the same principle across all its tools: if the computation can run locally, it should.&lt;/p&gt;

&lt;h2&gt;
  
  
  The accuracy question
&lt;/h2&gt;

&lt;p&gt;Let's be honest. Whisper in a browser is not going to match a dedicated GPU running the full &lt;code&gt;large-v3&lt;/code&gt; model. But the gap is smaller than you'd expect.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;clear, single-speaker English audio&lt;/strong&gt; (the most common use case for product demos, tutorials, and course videos), the browser version produces usable subtitles with roughly 93-95% word accuracy. Most errors are homophones ("their" vs "there") or uncommon proper nouns.&lt;/p&gt;

&lt;p&gt;Where it struggles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Heavy accents + background noise.&lt;/strong&gt; A speaker with a strong regional accent in a noisy cafe will produce more errors than the same speaker in a quiet room.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple overlapping speakers.&lt;/strong&gt; Whisper was trained primarily on single-speaker audio. Crosstalk confuses the decoder.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain-specific jargon.&lt;/strong&gt; Medical terminology, legal Latin, or niche technical vocabulary that didn't appear frequently enough in the training data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Very long silent gaps.&lt;/strong&gt; Extended silence can cause the model to hallucinate repeated phrases; a known Whisper behavior documented in the &lt;a href="https://cdn.openai.com/papers/whisper.pdf" rel="noopener noreferrer"&gt;OpenAI research paper&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For professional broadcast work, you'll still want &lt;a href="https://www.rev.com/" rel="noopener noreferrer"&gt;human review&lt;/a&gt;. But for social media, internal videos, educational content, and quick drafts, browser-based Whisper is genuinely good enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  85% of social media videos play on mute
&lt;/h2&gt;

&lt;p&gt;This statistic from &lt;a href="https://digiday.com/" rel="noopener noreferrer"&gt;Digiday&lt;/a&gt; keeps getting cited because it keeps being true. On Instagram, TikTok, LinkedIn, and Twitter/X, the default playback is muted. If your video doesn't have burned-in captions, most viewers will scroll past it.&lt;/p&gt;

&lt;p&gt;The subtitle generator doesn't just export .srt files. It can &lt;a href="https://kitmul.com/en/ai/automatic-subtitle-generator" rel="noopener noreferrer"&gt;burn subtitles directly into the video&lt;/a&gt; using FFmpeg.wasm, with customizable font size, color, and background opacity. The output is a new MP4 with permanent, embedded captions; no player support needed, no separate file to upload.&lt;/p&gt;

&lt;p&gt;For content creators who post across multiple platforms, this is the fastest workflow I've found: drop video, wait 2-3 minutes, customize caption style, download captioned video. One tool, zero context switches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jt2d8bbmujln1f3yj1k.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jt2d8bbmujln1f3yj1k.webp" alt="A content creator setting up camera equipment in a studio" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Subtitle formats: VTT vs SRT
&lt;/h2&gt;

&lt;p&gt;If you're not sure which format to use, here's the practical difference:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;WebVTT (.vtt)&lt;/th&gt;
&lt;th&gt;SubRip (.srt)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Styling&lt;/td&gt;
&lt;td&gt;Supports CSS-like styling, positioning&lt;/td&gt;
&lt;td&gt;Plain text only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web players&lt;/td&gt;
&lt;td&gt;Native HTML5 &lt;code&gt;&amp;lt;track&amp;gt;&lt;/code&gt; support&lt;/td&gt;
&lt;td&gt;Requires parser&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;YouTube&lt;/td&gt;
&lt;td&gt;Accepted&lt;/td&gt;
&lt;td&gt;Accepted&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Social media&lt;/td&gt;
&lt;td&gt;Varies&lt;/td&gt;
&lt;td&gt;Widely supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Metadata&lt;/td&gt;
&lt;td&gt;Headers, comments, notes&lt;/td&gt;
&lt;td&gt;Sequence numbers only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Spec&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.w3.org/TR/webvtt1/" rel="noopener noreferrer"&gt;W3C standard&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;De facto standard&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Rule of thumb:&lt;/strong&gt; use VTT for web-based players and HTML5 video, SRT for everything else. Both are editable in any text editor, so converting between them is trivial.&lt;/p&gt;
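Both formats are trivial to emit once you have timed segments. The main gotcha is the millisecond separator: SRT uses a comma, WebVTT a dot. A minimal sketch (the segment shape is an assumption):

```javascript
// Format timed segments ({ text, start, end }, seconds) as SRT or WebVTT.
// SRT separates milliseconds with a comma, WebVTT with a dot, and WebVTT
// requires the "WEBVTT" header line.

function formatTimestamp(seconds, sep) {
  const totalMs = Math.round(seconds * 1000); // avoid float rounding at .999
  const h = String(Math.floor(totalMs / 3600000)).padStart(2, "0");
  const m = String(Math.floor((totalMs % 3600000) / 60000)).padStart(2, "0");
  const s = String(Math.floor((totalMs % 60000) / 1000)).padStart(2, "0");
  const ms = String(totalMs % 1000).padStart(3, "0");
  return `${h}:${m}:${s}${sep}${ms}`;
}

function toSrt(segments) {
  return segments
    .map((seg, i) =>
      `${i + 1}\n${formatTimestamp(seg.start, ",")} --> ${formatTimestamp(seg.end, ",")}\n${seg.text}\n`)
    .join("\n");
}

function toVtt(segments) {
  return "WEBVTT\n\n" + segments
    .map(seg =>
      `${formatTimestamp(seg.start, ".")} --> ${formatTimestamp(seg.end, ".")}\n${seg.text}\n`)
    .join("\n");
}
```

This is also why converting between the two is trivial: strip or add the header, swap the separator, and renumber.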

&lt;h2&gt;
  
  
  The accessibility angle nobody talks about
&lt;/h2&gt;

&lt;p&gt;Subtitles aren't just a growth hack. Under the &lt;a href="https://www.ada.gov/" rel="noopener noreferrer"&gt;Americans with Disabilities Act&lt;/a&gt; and the &lt;a href="https://ec.europa.eu/social/main.jsp?catId=1202" rel="noopener noreferrer"&gt;European Accessibility Act&lt;/a&gt; (which took full effect in June 2025), video content published by businesses must be accessible to people who are deaf or hard of hearing. This applies to websites, apps, and social media.&lt;/p&gt;

&lt;p&gt;Most small businesses and solo creators don't subtitle their videos because the cost and friction are too high. A free, instant, no-upload tool removes that excuse entirely.&lt;/p&gt;

&lt;p&gt;If accessibility is part of your workflow, the &lt;a href="https://kitmul.com/en/design-css/accessibility-tree-visualizer" rel="noopener noreferrer"&gt;Accessibility Tree Visualizer&lt;/a&gt; on Kitmul can help you audit your web content's ARIA structure alongside your captioning efforts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Complementary workflow tools
&lt;/h2&gt;

&lt;p&gt;Subtitles are one step in a content pipeline. Here's how other Kitmul tools fit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/video/extract-audio-from-video" rel="noopener noreferrer"&gt;Extract Audio from Video&lt;/a&gt;&lt;/strong&gt; — Pull the audio track from any video before processing. Useful if you want to run &lt;a href="https://kitmul.com/en/writing/speech-to-text" rel="noopener noreferrer"&gt;Speech to Text&lt;/a&gt; separately for a full transcript.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/video/video-trimmer" rel="noopener noreferrer"&gt;Video Trimmer&lt;/a&gt;&lt;/strong&gt; — Cut your video to the relevant segment before generating subtitles. Processing a 2-minute clip is faster than a 30-minute recording.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/music/audio-stem-splitter" rel="noopener noreferrer"&gt;Audio Stem Splitter&lt;/a&gt;&lt;/strong&gt; — Isolate vocals from background music before transcription. Cleaner audio input produces more accurate subtitles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/ai/text-readability-scorer" rel="noopener noreferrer"&gt;Text Readability Scorer&lt;/a&gt;&lt;/strong&gt; — Paste your subtitle text to check if the language is appropriate for your audience's reading level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/ai/keyword-extractor" rel="noopener noreferrer"&gt;Keyword Extractor&lt;/a&gt;&lt;/strong&gt; — Pull keywords from your transcript for SEO metadata, video tags, and content optimization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these run in your browser. No uploads. No accounts. They compose well because they all operate on the same principle: your data stays on your device.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/ai/automatic-subtitle-generator" rel="noopener noreferrer"&gt;Automatic Subtitle Generator&lt;/a&gt; is free. Drop an MP4, WebM, MOV, or MKV file. Choose your language (or let the AI auto-detect from &lt;a href="https://github.com/openai/whisper#available-models-and-languages" rel="noopener noreferrer"&gt;90+ supported languages&lt;/a&gt;). Wait a couple of minutes. Download your subtitles or your captioned video.&lt;/p&gt;

&lt;p&gt;No account. No upload. No watermark. No daily limit.&lt;/p&gt;

&lt;p&gt;If you're building content at scale and want the transcript as raw text instead of timed subtitles, the &lt;a href="https://kitmul.com/en/writing/speech-to-text" rel="noopener noreferrer"&gt;Speech to Text&lt;/a&gt; tool handles that use case with the same Whisper model.&lt;/p&gt;

</description>
      <category>webassembly</category>
      <category>javascript</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Three Ways to Convert JSON to TypeScript. Only One Is Deterministic.</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Mon, 27 Apr 2026 21:55:12 +0000</pubDate>
      <link>https://forem.com/aralroca/three-ways-to-convert-json-to-typescript-only-one-is-deterministic-1h59</link>
      <guid>https://forem.com/aralroca/three-ways-to-convert-json-to-typescript-only-one-is-deterministic-1h59</guid>
      <description>&lt;p&gt;There are three ways to turn a JSON response into TypeScript interfaces. You can write them by hand, you can ask an LLM, or you can run the JSON through a deterministic converter. I've used all three. Two of them have failure modes that most people don't think about until they ship a bug.&lt;/p&gt;

&lt;h2&gt;
  
  
  The manual approach: slow and accurate until it isn't
&lt;/h2&gt;

&lt;p&gt;Writing interfaces by hand works when you have three fields. It stops working around field fifteen. A &lt;a href="https://stripe.com/docs/api/charges/object" rel="noopener noreferrer"&gt;Stripe charge object&lt;/a&gt; has 40+ properties. A &lt;a href="https://docs.github.com/en/rest/pulls/pulls#get-a-pull-request" rel="noopener noreferrer"&gt;GitHub pull request&lt;/a&gt; response is over 100 fields deep once you count nested objects. Nobody types those by hand without making mistakes.&lt;/p&gt;

&lt;p&gt;The failure mode is subtle. You open the API docs, you start writing, and by field twenty you're skimming. Was &lt;code&gt;merged_at&lt;/code&gt; a string or a Date? Is &lt;code&gt;labels&lt;/code&gt; an array of objects or an array of strings? You guess, you move on, and TypeScript's compiler trusts whatever you wrote. The type system only catches errors if the types are correct in the first place.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#any" rel="noopener noreferrer"&gt;TypeScript documentation&lt;/a&gt; puts it plainly: &lt;code&gt;any&lt;/code&gt; disables type checking for that value. But a wrong interface is arguably worse than &lt;code&gt;any&lt;/code&gt;, because it gives you false confidence. Your IDE autocompletes fields that don't exist. Your code compiles. The crash happens at runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  The LLM approach: fast and probabilistic
&lt;/h2&gt;

&lt;p&gt;Pasting a JSON blob into ChatGPT or &lt;a href="https://claude.ai" rel="noopener noreferrer"&gt;Claude&lt;/a&gt; and asking for TypeScript interfaces is tempting. It's fast. It handles nesting. It even names interfaces in reasonable ways most of the time.&lt;/p&gt;

&lt;p&gt;The problem is that LLMs are &lt;a href="https://en.wikipedia.org/wiki/Language_model" rel="noopener noreferrer"&gt;probabilistic&lt;/a&gt;. Give the same JSON to the same model twice and you might get different output. Sometimes it adds &lt;code&gt;?&lt;/code&gt; to fields that aren't optional. Sometimes it invents a union type that doesn't match the data. Sometimes it decides &lt;code&gt;id&lt;/code&gt; should be &lt;code&gt;string&lt;/code&gt; when the value is clearly &lt;code&gt;1&lt;/code&gt;. I've seen models produce &lt;code&gt;Date&lt;/code&gt; for ISO timestamp strings; technically aspirational, but wrong if you're not parsing the string into a Date object first.&lt;/p&gt;

&lt;p&gt;These aren't bugs in the model. It's the nature of the tool. An LLM generates plausible text based on patterns. It doesn't parse your JSON the way a type system does. It reads it, approximates what the types should be, and writes something that looks right. Mostly it is right. But "mostly right" and "deterministically correct" are different things when your type definitions guard runtime behavior.&lt;/p&gt;

&lt;p&gt;There's also the privacy angle. Pasting a production API response into a third-party LLM means sending your data to someone else's server. If that response contains user PII, internal endpoints, or auth tokens that leaked into the payload, you've just shared them with an external service. For side projects, nobody cares. For production codebases with compliance requirements, that's a conversation with your security team you don't want to have.&lt;/p&gt;

&lt;h2&gt;
  
  
  The deterministic approach: same input, same output, every time
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kitmul.com/en/developer/json-to-typescript" rel="noopener noreferrer"&gt;deterministic JSON-to-TypeScript converter&lt;/a&gt; doesn't guess. It parses. The algorithm walks the JSON tree, inspects each value's JavaScript type, and maps it to the corresponding TypeScript type. There's no randomness, no temperature parameter, no model that might behave differently on Thursday.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21aig6ot7i7qky1tz6uy.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21aig6ot7i7qky1tz6uy.webp" alt="The JSON to TypeScript converter showing a nested user object with profile data, social links, and posts array; Monaco editor with syntax highlighting, interface/type toggle, and generated TypeScript output" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rules are mechanical:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;"hello"&lt;/code&gt; is always &lt;code&gt;string&lt;/code&gt;. Not sometimes &lt;code&gt;string&lt;/code&gt;, not occasionally &lt;code&gt;"hello"&lt;/code&gt; as a literal type.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;42&lt;/code&gt; is always &lt;code&gt;number&lt;/code&gt;. Not &lt;code&gt;int&lt;/code&gt;, not &lt;code&gt;float&lt;/code&gt;, not &lt;code&gt;number | string&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;[1, 2, 3]&lt;/code&gt; is always &lt;code&gt;number[]&lt;/code&gt;. Not &lt;code&gt;Array&amp;lt;number&amp;gt;&lt;/code&gt;, not &lt;code&gt;number[] | undefined&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;{"a": 1}&lt;/code&gt; always generates a separate named interface with &lt;code&gt;a: number&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;null&lt;/code&gt; is always &lt;code&gt;null&lt;/code&gt;. Not &lt;code&gt;undefined&lt;/code&gt;, not omitted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same JSON in, same TypeScript out. Run it a hundred times and you get a hundred identical results. That's the property you want from a tool that generates type definitions your compiler will trust.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft63imyxrdpzadjm1xbvq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft63imyxrdpzadjm1xbvq.webp" alt="The TypeScript output showing interface Root with typed fields, nested Profile and Social interfaces, and a PostsItem array type; the interface/type toggle lets you switch between both formats" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What the algorithm actually does
&lt;/h2&gt;

&lt;p&gt;Under the hood, the converter does a recursive descent through your JSON structure. For every value it encounters, it calls &lt;code&gt;inferType()&lt;/code&gt;, which returns the TypeScript type string. Objects produce new interface entries in a &lt;code&gt;Map&lt;/code&gt;. Arrays inspect their elements and produce either a uniform type (&lt;code&gt;string[]&lt;/code&gt;) or a union type (&lt;code&gt;(string | number)[]&lt;/code&gt;). Empty arrays become &lt;code&gt;unknown[]&lt;/code&gt; because there's no element to infer from.&lt;/p&gt;

&lt;p&gt;Property names get converted to PascalCase for interface names. Keys that aren't valid JavaScript identifiers (hyphens, spaces, leading digits) get quoted automatically. The output can be toggled between &lt;code&gt;interface&lt;/code&gt; and &lt;code&gt;type&lt;/code&gt; declarations.&lt;/p&gt;
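The whole inference pass condenses to a few dozen lines. This is a simplified sketch of the approach described above, not the tool's actual source; interface naming and collision handling are stripped down:

```javascript
// Deterministic recursive descent over a parsed JSON value. Each object
// becomes a named interface in a Map; primitives map straight to their
// typeof; arrays produce uniform or union element types; empty arrays
// become unknown[] because there is nothing to infer from.

function jsonToInterfaces(json, rootName = "Root") {
  const interfaces = new Map(); // interface name -> field lines

  const pascal = (key) =>
    key.replace(/(^|[-_ ])(\w)/g, (_, sep, c) => c.toUpperCase());

  function inferType(value, nameHint) {
    if (value === null) return "null";
    if (Array.isArray(value)) {
      if (value.length === 0) return "unknown[]";
      const types = [...new Set(value.map((v) => inferType(v, nameHint + "Item")))];
      return types.length === 1 ? `${types[0]}[]` : `(${types.join(" | ")})[]`;
    }
    if (typeof value === "object") {
      const name = pascal(nameHint);
      const fields = Object.entries(value).map(([k, v]) => {
        // Quote keys that aren't valid JS identifiers (hyphens, spaces...).
        const key = /^[A-Za-z_$][\w$]*$/.test(k) ? k : JSON.stringify(k);
        return `  ${key}: ${inferType(v, k)};`;
      });
      interfaces.set(name, fields);
      return name;
    }
    return typeof value; // "string" | "number" | "boolean"
  }

  inferType(json, rootName);
  return [...interfaces.entries()]
    .map(([name, fields]) => `interface ${name} {\n${fields.join("\n")}\n}`)
    .join("\n\n");
}
```

There is no branch in that code that can produce two different outputs for the same input, which is the whole point.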

&lt;p&gt;Here's a concrete example. This JSON:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Aral Roca"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"aral@example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"active"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"roles"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"admin"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"editor"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"profile"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"bio"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Full-stack developer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"avatar"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://example.com/avatar.png"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"social"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"github"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"aralroca"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"twitter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"aralroca"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"posts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;101&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Understanding TypeScript Interfaces"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"published"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"typescript"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"tutorial"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"createdAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-04-27T10:00:00Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Produces exactly this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;Root&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;active&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
  &lt;span class="nl"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Profile&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PostsItem&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
  &lt;span class="nl"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;Profile&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;bio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;avatar&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;social&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Social&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;Social&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;github&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;twitter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;PostsItem&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;published&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Four interfaces. Every field typed correctly. Every nested object extracted into its own named interface. No randomness involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interface vs. type: when the toggle matters
&lt;/h2&gt;

&lt;p&gt;The converter offers both &lt;code&gt;interface&lt;/code&gt; and &lt;code&gt;type&lt;/code&gt; output. The choice isn't cosmetic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.typescriptlang.org/docs/handbook/declaration-merging.html" rel="noopener noreferrer"&gt;Interfaces support declaration merging&lt;/a&gt;; if two interfaces share the same name in the same scope, TypeScript merges their properties. Types don't. For library authors who want consumers to extend types, interfaces are the better pick.&lt;/p&gt;

&lt;p&gt;Types handle unions, intersections, and mapped types more naturally. If you need &lt;code&gt;type Result = Success | Error&lt;/code&gt; or compose shapes with &lt;code&gt;&amp;amp;&lt;/code&gt;, the &lt;code&gt;type&lt;/code&gt; output saves a conversion step.&lt;/p&gt;

&lt;p&gt;For API response typing, it rarely matters. Pick whichever your team's linting rules enforce and move on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where deterministic inference still needs human review
&lt;/h2&gt;

&lt;p&gt;The converter infers types from values, not from schemas. That's a feature; it works with any JSON without requiring an &lt;a href="https://www.openapis.org/" rel="noopener noreferrer"&gt;OpenAPI spec&lt;/a&gt; or &lt;a href="https://json-schema.org/" rel="noopener noreferrer"&gt;JSON Schema&lt;/a&gt;. But it means there are edges where you'll want to adjust:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optional fields.&lt;/strong&gt; The converter only sees the sample you provide. If a field is sometimes absent from the response, add &lt;code&gt;?&lt;/code&gt; manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;String enums.&lt;/strong&gt; &lt;code&gt;"status": "active"&lt;/code&gt; becomes &lt;code&gt;string&lt;/code&gt;, not &lt;code&gt;"active" | "inactive" | "suspended"&lt;/code&gt;. Narrow it yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Date strings.&lt;/strong&gt; ISO 8601 timestamps like &lt;code&gt;"2026-04-27T10:00:00Z"&lt;/code&gt; are &lt;code&gt;string&lt;/code&gt; to the converter. If you're parsing them with &lt;a href="https://date-fns.org/" rel="noopener noreferrer"&gt;date-fns&lt;/a&gt; or &lt;a href="https://day.js.org/" rel="noopener noreferrer"&gt;dayjs&lt;/a&gt;, you'll want to change those to &lt;code&gt;Date&lt;/code&gt; in your final types.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pagination wrappers.&lt;/strong&gt; A response like &lt;code&gt;{ data: [...], meta: { page: 1, total: 100 } }&lt;/code&gt; generates a &lt;code&gt;Root&lt;/code&gt; interface with both. Rename it to &lt;code&gt;PaginatedResponse&amp;lt;T&amp;gt;&lt;/code&gt; and extract &lt;code&gt;Meta&lt;/code&gt; as a generic.&lt;/p&gt;
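&lt;p&gt;Put together, the hand adjustments look something like this (the names and fields are illustrative):&lt;/p&gt;

```typescript
// Generated baseline, refined by hand. Names and fields are illustrative.
type Status = "active" | "inactive" | "suspended"; // narrowed from string

interface UserResponse {      // renamed from Root
  id: number;
  name: string;
  status: Status;
  avatar?: string;            // sometimes absent in real responses
  createdAt: string;          // parse to Date at the boundary if needed
}

// Pagination wrapper extracted as a generic:
interface PaginatedResponse<T> {
  data: T[];
  meta: { page: number; total: number };
}

const page: PaginatedResponse<UserResponse> = {
  data: [{ id: 1, name: "Aral", status: "active", createdAt: "2026-04-27T10:00:00Z" }],
  meta: { page: 1, total: 100 },
};
```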

&lt;p&gt;These adjustments take seconds. The point is that the deterministic converter gives you a correct baseline; the parts that need human judgment are the parts a machine genuinely can't infer from a single sample. An LLM would also get these wrong; the difference is the LLM might also get the easy parts wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  Privacy as a feature, not a marketing line
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/developer/json-to-typescript" rel="noopener noreferrer"&gt;converter&lt;/a&gt; runs entirely client-side. The JSON never leaves your browser. No server call, no analytics on your input, no account.&lt;/p&gt;

&lt;p&gt;This isn't an abstract benefit. Plenty of teams have security policies that prohibit uploading source code or API responses to third-party services. That rules out most online tools. It rules out pasting production responses into LLM chatbots. A client-side converter that processes everything in a JavaScript function on your machine has zero compliance surface.&lt;/p&gt;

&lt;p&gt;Open your browser's network tab while using it. You'll see nothing sent.&lt;/p&gt;

&lt;h2&gt;
  
  
  A practical workflow
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Get a real response.&lt;/strong&gt; Use &lt;code&gt;curl&lt;/code&gt;, &lt;a href="https://www.postman.com/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt;, or your browser's network tab to capture an actual API response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Paste and convert.&lt;/strong&gt; Open the &lt;a href="https://kitmul.com/en/developer/json-to-typescript" rel="noopener noreferrer"&gt;JSON to TypeScript converter&lt;/a&gt;, paste the JSON, copy the output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Rename and refine.&lt;/strong&gt; Change &lt;code&gt;Root&lt;/code&gt; to &lt;code&gt;UserResponse&lt;/code&gt;. Add &lt;code&gt;?&lt;/code&gt; where needed. Narrow string unions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Co-locate with your API client.&lt;/strong&gt; I put types in a &lt;code&gt;types.ts&lt;/code&gt; next to whatever file makes the &lt;code&gt;fetch&lt;/code&gt; or &lt;code&gt;axios&lt;/code&gt; call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Add runtime validation.&lt;/strong&gt; Use &lt;a href="https://zod.dev/" rel="noopener noreferrer"&gt;Zod&lt;/a&gt; or &lt;a href="https://valibot.dev/" rel="noopener noreferrer"&gt;Valibot&lt;/a&gt; to validate that the API actually sends what your types describe. The converter gives you structure; a schema library gives you runtime guarantees.&lt;/p&gt;
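&lt;p&gt;If you don't want a schema library yet, even a hand-rolled type guard catches shape drift at the boundary. This is a sketch of the idea Zod formalizes, not Zod's API:&lt;/p&gt;

```typescript
// Hand-rolled runtime check; a schema library replaces this boilerplate
// with a declarative schema. The shape here is illustrative.
interface UserResponse {
  id: number;
  name: string;
}

function isUserResponse(value: unknown): value is UserResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as { [key: string]: unknown };
  return typeof v.id === "number" && typeof v.name === "string";
}

// At the fetch boundary:
// const json: unknown = await response.json();
// if (!isUserResponse(json)) throw new Error("Unexpected API shape");
```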

&lt;p&gt;The whole thing takes under a minute per endpoint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond API responses
&lt;/h2&gt;

&lt;p&gt;The converter handles any valid JSON:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Config files.&lt;/strong&gt; Paste &lt;code&gt;tsconfig.json&lt;/code&gt; or &lt;code&gt;package.json&lt;/code&gt; for type-safe config loading.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database exports.&lt;/strong&gt; A MongoDB document or PostgreSQL row as JSON becomes your ORM layer types.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test fixtures.&lt;/strong&gt; If you write tests with &lt;a href="https://jestjs.io/" rel="noopener noreferrer"&gt;Jest&lt;/a&gt; or &lt;a href="https://vitest.dev/" rel="noopener noreferrer"&gt;Vitest&lt;/a&gt;, converting fixture files ensures your mocks match production shapes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CMS content.&lt;/strong&gt; Headless CMS responses from &lt;a href="https://strapi.io/" rel="noopener noreferrer"&gt;Strapi&lt;/a&gt;, &lt;a href="https://www.sanity.io/" rel="noopener noreferrer"&gt;Sanity&lt;/a&gt;, or &lt;a href="https://www.contentful.com/" rel="noopener noreferrer"&gt;Contentful&lt;/a&gt; are deeply nested. Type them once; let the compiler catch template bugs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For formatting JSON before converting, the &lt;a href="https://kitmul.com/en/developer/json-formatter" rel="noopener noreferrer"&gt;JSON Formatter&lt;/a&gt; handles pretty-printing and validation. For the opposite direction (stripping HTML into something an LLM can process efficiently), there's the &lt;a href="https://kitmul.com/en/writing/html-to-markdown" rel="noopener noreferrer"&gt;HTML to Markdown converter&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The tradeoff matrix
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Manual&lt;/th&gt;
&lt;th&gt;LLM&lt;/th&gt;
&lt;th&gt;Deterministic&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Speed&lt;/td&gt;
&lt;td&gt;Slow&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Correctness&lt;/td&gt;
&lt;td&gt;Depends on you&lt;/td&gt;
&lt;td&gt;Mostly correct&lt;/td&gt;
&lt;td&gt;Always correct for the sample&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Consistency&lt;/td&gt;
&lt;td&gt;Varies&lt;/td&gt;
&lt;td&gt;Non-deterministic&lt;/td&gt;
&lt;td&gt;Identical every run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Privacy&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Data sent to server&lt;/td&gt;
&lt;td&gt;Client-side only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optional fields&lt;/td&gt;
&lt;td&gt;You decide&lt;/td&gt;
&lt;td&gt;Sometimes guesses&lt;/td&gt;
&lt;td&gt;You decide&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;String narrowing&lt;/td&gt;
&lt;td&gt;You decide&lt;/td&gt;
&lt;td&gt;Sometimes guesses&lt;/td&gt;
&lt;td&gt;You decide&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The deterministic converter handles the mechanical part, mapping values to types, perfectly. The parts it can't handle (optionality, string enums, date parsing) are the same parts the other approaches also can't handle reliably. The difference is it doesn't introduce new errors on the parts it can handle.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bottom line
&lt;/h2&gt;

&lt;p&gt;Type safety isn't a spectrum. Your types are either correct or they're not. Manual typing is slow and error-prone at scale. LLM typing is fast but probabilistic. Deterministic conversion is fast and correct, within the bounds of what any tool can infer from a single JSON sample.&lt;/p&gt;

&lt;p&gt;Use the &lt;a href="https://kitmul.com/en/developer/json-to-typescript" rel="noopener noreferrer"&gt;JSON to TypeScript converter&lt;/a&gt; for the mechanical work. Spend your judgment on optional fields, string unions, and interface naming: the decisions that require context no tool has.&lt;/p&gt;

&lt;p&gt;Zero signup. Zero upload. Same input, same output. Part of the &lt;a href="https://kitmul.com/en/developer" rel="noopener noreferrer"&gt;Developer &amp;amp; Programming Utilities&lt;/a&gt; on Kitmul.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@florianolv" rel="noopener noreferrer"&gt;Florian Olivo&lt;/a&gt; on &lt;a href="https://unsplash.com" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>api</category>
    </item>
    <item>
      <title>My Friends Spent 14 Minutes Deciding Where to Go Train. A Wheel Spinner Fixed It in 3 Seconds.</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Mon, 27 Apr 2026 16:12:25 +0000</pubDate>
      <link>https://forem.com/aralroca/my-friends-spent-14-minutes-deciding-where-to-go-train-a-wheel-spinner-fixed-it-in-3-seconds-5f50</link>
      <guid>https://forem.com/aralroca/my-friends-spent-14-minutes-deciding-where-to-go-train-a-wheel-spinner-fixed-it-in-3-seconds-5f50</guid>
      <description>&lt;p&gt;Last Tuesday my friends spent 14 minutes deciding where to go train. Fourteen minutes. Four of us, three opinions about that rooftop downtown, one person who wanted the park with the low walls, and nobody willing to commit because nobody wanted to be the one who picked wrong.&lt;/p&gt;

&lt;p&gt;I opened a browser tab, typed six parkour spots into a &lt;a href="https://kitmul.com/en/random/spin-the-wheel" rel="noopener noreferrer"&gt;spin the wheel tool&lt;/a&gt;, and hit spin. The wheel landed on the bridge underpass. Everyone shrugged. We went. The session was solid. Nobody complained.&lt;/p&gt;

&lt;p&gt;That 14-minute argument wasn't about training spots. It was about &lt;a href="https://en.wikipedia.org/wiki/Decision_fatigue" rel="noopener noreferrer"&gt;decision fatigue&lt;/a&gt;: the slow erosion of your ability to make good choices after making too many mediocre ones. And it happens constantly in contexts far more consequential than where to practice your kongs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why humans are terrible at random selection
&lt;/h2&gt;

&lt;p&gt;We think we're good at picking things randomly. We're not. Ask someone to pick a random number between 1 and 10, and they'll disproportionately choose 7. Ask a teacher to cold-call students "randomly" during class, and they'll unconsciously favor the kids who make eye contact, sit up front, or haven't been called recently. The bias is invisible to the person doing the picking.&lt;/p&gt;

&lt;p&gt;This isn't a character flaw. It's architecture. Human brains evolved to find patterns, not to generate randomness. A &lt;a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0041531" rel="noopener noreferrer"&gt;study published in PLOS ONE&lt;/a&gt; showed that when people try to produce random sequences, they systematically avoid repeats and clusters, the exact features that genuine randomness produces. We're so bad at it that researchers use human-generated "random" sequences as a test for cognitive bias.&lt;/p&gt;
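&lt;p&gt;You can check the repeats claim yourself in a few lines: in a genuinely random digit sequence, roughly 10% of adjacent pairs are repeats, a rate human-generated sequences consistently undershoot:&lt;/p&gt;

```typescript
// Fraction of adjacent equal pairs in n random digits; the expected value is 0.1.
function repeatRate(n: number): number {
  let repeats = 0;
  let prev = -1; // sentinel: never matches a real digit
  for (let i = 0; i !== n; i++) {
    const digit = Math.floor(Math.random() * 10);
    if (digit === prev) repeats++;
    prev = digit;
  }
  return repeats / (n - 1); // n - 1 adjacent pairs in a sequence of n digits
}
```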

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Barry_Schwartz_(psychologist)" rel="noopener noreferrer"&gt;Barry Schwartz&lt;/a&gt; documented the downstream effect in &lt;em&gt;The Paradox of Choice&lt;/em&gt;: when people face too many options, they either freeze (analysis paralysis) or choose and then ruminate about whether they chose wrong. His &lt;a href="https://works.swarthmore.edu/fac-psychology/198/" rel="noopener noreferrer"&gt;research at Swarthmore&lt;/a&gt; found that "maximizers"; people who need to evaluate every option before committing; report significantly less satisfaction with their decisions than "satisficers" who pick something good enough and move on.&lt;/p&gt;

&lt;p&gt;A random picker wheel is a satisficer machine. It removes the emotional weight from low-stakes decisions and hands it to probability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where a random wheel actually solves real problems
&lt;/h2&gt;

&lt;p&gt;I assumed spin-the-wheel tools were novelty toys until I started paying attention to how people actually use them. The use cases fall into three categories that are surprisingly distinct.&lt;/p&gt;

&lt;h3&gt;
  
  
  Classrooms
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52g2lkxwx5qdzjqscnzr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52g2lkxwx5qdzjqscnzr.webp" alt="A group collaborating in a classroom setting; the exact scenario where random selection beats raised hands" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Teachers have known for decades that cold-calling students improves engagement. The problem is that human-selected cold calls are biased. Teachers call on students who sit in a T-shape (front row plus center column) at &lt;a href="https://en.wikipedia.org/wiki/Classroom_management" rel="noopener noreferrer"&gt;roughly 3x the rate&lt;/a&gt; of students in the back corners. They call on boys more than girls. They avoid students who look anxious, which means the students who most need practice speaking never get it.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://dev.to/en/random/random-name-picker"&gt;random name picker&lt;/a&gt; fixes this mechanically. Put 30 names on the wheel, spin it, and whoever it lands on answers. The randomness is visible to the entire class; nobody can accuse the teacher of favoritism. Students in my network who teach middle school report that visible randomness ("the wheel picked you, not me") reduces pushback from students who don't want to be called on. The accountability shifts from the teacher to the mechanism.&lt;/p&gt;

&lt;h3&gt;
  
  
  Team decisions and retrospectives
&lt;/h3&gt;

&lt;p&gt;Sprint retrospectives generate action items. Someone has to own each one. The politeness problem kicks in: nobody volunteers for the annoying tasks, and the same responsible people end up with disproportionate load. A wheel spin assigns ownership without the social dynamics.&lt;/p&gt;

&lt;p&gt;I've seen this work in standup meetings too. Instead of going in the same clockwise order every day (which means the same person always goes first while still waking up, and the same person always goes last when everyone is checked out), spin the wheel for speaking order. Random order keeps people alert because you don't know when your turn is coming.&lt;/p&gt;

&lt;p&gt;Pair programming rotations, code review assignments, who presents the demo to stakeholders: all of these benefit from randomization that a &lt;a href="https://kitmul.com/en/random/team-generator" rel="noopener noreferrer"&gt;team generator&lt;/a&gt; or a &lt;a href="https://kitmul.com/en/random/random-choice-picker" rel="noopener noreferrer"&gt;random choice picker&lt;/a&gt; handles in seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Giveaways and content creation
&lt;/h3&gt;

&lt;p&gt;If you've ever run a social media giveaway, you know the anxiety. Pick a winner manually and someone will accuse you of favoritism. Use a random picker on camera and the audience trusts the result because they watched the process. The wheel is theatrical in a way that a random number generator isn't. Nobody wants to watch someone click "generate" and read a number. People &lt;em&gt;do&lt;/em&gt; want to watch a wheel spin and slow down to a dramatic stop.&lt;/p&gt;

&lt;p&gt;Streamers, YouTubers, and event organizers use wheel spinners for this exact reason. The visual feedback is the product. A &lt;a href="https://kitmul.com/en/random/coin-flipper" rel="noopener noreferrer"&gt;coin flipper&lt;/a&gt; works for binary choices, and a &lt;a href="https://kitmul.com/en/random/dice-roller" rel="noopener noreferrer"&gt;dice roller&lt;/a&gt; works for numbered outcomes, but for named options with more than six entries, the wheel is the right interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Math.random() actually works under the hood
&lt;/h2&gt;

&lt;p&gt;Since the tool runs entirely in your browser, the randomness comes from JavaScript's &lt;code&gt;Math.random()&lt;/code&gt;. That function has an interesting history.&lt;/p&gt;

&lt;p&gt;Until 2015, Chrome's V8 engine used an algorithm called MWC1616 (multiply with carry) that was, frankly, terrible. It had only 2^32 possible states and failed multiple statistical randomness tests. The V8 team &lt;a href="https://v8.dev/blog/math-random" rel="noopener noreferrer"&gt;documented the replacement&lt;/a&gt; in detail: they switched to &lt;a href="https://en.wikipedia.org/wiki/Xorshift" rel="noopener noreferrer"&gt;xorshift128+&lt;/a&gt;, an algorithm with 2^128 - 1 possible states that passes every test in the TestU01 suite. Firefox and Safari adopted the same algorithm.&lt;/p&gt;

&lt;p&gt;Is it cryptographically secure? No. &lt;code&gt;Math.random()&lt;/code&gt; is a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/random" rel="noopener noreferrer"&gt;pseudorandom number generator&lt;/a&gt;, not a cryptographic one. If you're generating encryption keys, use &lt;code&gt;crypto.getRandomValues()&lt;/code&gt;. But for picking a training spot or selecting a student to answer a question? xorshift128+ is more than sufficient. The distribution is uniform, the period is astronomically long, and no human will ever detect a pattern in the output.&lt;/p&gt;

&lt;p&gt;The wheel animation itself uses the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API" rel="noopener noreferrer"&gt;Canvas API&lt;/a&gt; to draw colored slices and an easing function for the spin deceleration. The result is determined before the animation starts; the wheel is rendering a predetermined outcome with dramatic timing, not simulating physics. This means the visual experience is satisfying but the randomness is decided instantly.&lt;/p&gt;
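&lt;p&gt;The "predetermined outcome" design is easy to sketch: pick the winner first, then animate toward its slice. The spin count and easing curve below are illustrative choices, not the tool's exact values:&lt;/p&gt;

```typescript
// The winner is decided instantly; the animation replays a known destination.
function pickTarget(count: number): { index: number; targetAngle: number } {
  const index = Math.floor(Math.random() * count);
  const slice = (2 * Math.PI) / count;
  // Five full turns, then stop in the middle of the winning slice.
  const targetAngle = 5 * 2 * Math.PI + index * slice + slice / 2;
  return { index, targetAngle };
}

// Ease-out cubic: fast start, dramatic slowdown at the end.
function easeOutCubic(t: number): number {
  return 1 - Math.pow(1 - t, 3);
}

// Rotation to render at normalized animation time t in [0, 1]:
function rotationAt(t: number, targetAngle: number): number {
  return targetAngle * easeOutCubic(t);
}
```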

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfp2o17r20yh8xpxdr4v.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfp2o17r20yh8xpxdr4v.webp" alt="A team brainstorming session; sometimes the best decision is letting randomness decide for you" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The privacy argument
&lt;/h2&gt;

&lt;p&gt;Most spin-the-wheel tools online upload your entries to a server. Some of them store your data indefinitely. A few of the popular ones set tracking cookies from five different ad networks before you've even typed your first option.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/random/spin-the-wheel" rel="noopener noreferrer"&gt;Kitmul spin the wheel&lt;/a&gt; runs entirely in your browser. No entries leave your device. No server sees your student names, your team members' names, or your list of training spots. For teachers using student names; which are protected under &lt;a href="https://en.wikipedia.org/wiki/Family_Educational_Rights_and_Privacy_Act" rel="noopener noreferrer"&gt;FERPA&lt;/a&gt; in the US and similar regulations elsewhere; this isn't a nice-to-have. It's a compliance requirement that most online tools silently violate.&lt;/p&gt;

&lt;p&gt;The URL state persistence means you can bookmark a wheel configuration or share it as a link without any server-side storage. The options are encoded in the URL itself. Close the tab, open the bookmark, and your wheel is back.&lt;/p&gt;
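&lt;p&gt;URL-state persistence is a small trick worth stealing: encode the list in the query string and the bookmark becomes the database. The parameter name and separator here are illustrative, not the tool's actual format:&lt;/p&gt;

```typescript
// Round-trip the option list through the URL; no server storage involved.
// Assumes options don't themselves contain the "|" separator.
function encodeOptions(options: string[]): string {
  const params = new URLSearchParams();
  params.set("options", options.join("|"));
  return "?" + params.toString();
}

function decodeOptions(search: string): string[] {
  const raw = new URLSearchParams(search).get("options") ?? "";
  return raw === "" ? [] : raw.split("|");
}
```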

&lt;h2&gt;
  
  
  When not to use a wheel
&lt;/h2&gt;

&lt;p&gt;Random selection is wrong when the decision actually has stakes. Don't use a wheel to decide which database migration to run first. Don't use it to pick which candidate to interview. Don't use it to allocate budget.&lt;/p&gt;

&lt;p&gt;The wheel works when the options are roughly equivalent in value and the cost of choosing "wrong" is near zero. Training spots. Speaking order. Homework review partners. Game night picks. Giveaway winners from a pre-qualified pool.&lt;/p&gt;

&lt;p&gt;If you catch yourself putting items on a wheel and hoping it doesn't land on one of them, that's your brain telling you the decision isn't actually random-appropriate. You have a preference. Honor it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 14-minute rule
&lt;/h2&gt;

&lt;p&gt;After the training spot incident, I started timing how long group decisions take when everyone has veto power and nobody has a mechanism. The median for a group of 4+ people choosing from 5+ options: 14 minutes. The median for the same group using a random picker: 30 seconds, including the argument about whether the result is "really random."&lt;/p&gt;

&lt;p&gt;That's 13.5 minutes saved. Multiply that by the number of low-stakes group decisions your team makes per week. For us it was about 6. That's 81 minutes per week, an entire Pomodoro block plus change, spent on decisions where the outcome genuinely didn't matter.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/random/spin-the-wheel" rel="noopener noreferrer"&gt;spin the wheel&lt;/a&gt; is free, runs in your browser, and doesn't touch a server. Type your options, spin, and move on to the work that actually matters.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Spin the Wheel is part of the &lt;a href="https://kitmul.com/en/random" rel="noopener noreferrer"&gt;Random Generators toolkit&lt;/a&gt; on Kitmul. See also: &lt;a href="https://kitmul.com/en/math/random-number-generator" rel="noopener noreferrer"&gt;Random Number Generator&lt;/a&gt;, &lt;a href="https://kitmul.com/en/random/rock-paper-scissors" rel="noopener noreferrer"&gt;Rock Paper Scissors&lt;/a&gt;, and &lt;a href="https://kitmul.com/en/random/spaced-repetition-flashcards" rel="noopener noreferrer"&gt;Spaced Repetition Flashcards&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to Create 360 Panoramas with GPT Image 2 and View Them Interactively</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Sat, 25 Apr 2026 22:19:50 +0000</pubDate>
      <link>https://forem.com/aralroca/how-to-create-360-panoramas-with-gpt-image-2-and-view-them-interactively-21hb</link>
      <guid>https://forem.com/aralroca/how-to-create-360-panoramas-with-gpt-image-2-and-view-them-interactively-21hb</guid>
      <description>&lt;h2&gt;
  
  
  What you will learn
&lt;/h2&gt;

&lt;p&gt;This tutorial covers how to generate 360-degree equirectangular panorama images using &lt;a href="https://openai.com/index/introducing-chatgpt-images-2-0/" rel="noopener noreferrer"&gt;GPT Image 2&lt;/a&gt; (the image generation model inside ChatGPT) and how to view them in an &lt;a href="https://kitmul.com/en/image-design/interactive-360-photo-viewer" rel="noopener noreferrer"&gt;interactive 360 viewer&lt;/a&gt; that runs entirely in your browser.&lt;/p&gt;

&lt;p&gt;By the end you will know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What an equirectangular image is and why the format matters&lt;/li&gt;
&lt;li&gt;How to write prompts that produce usable 360 panoramas&lt;/li&gt;
&lt;li&gt;How to load the result into the viewer and explore it&lt;/li&gt;
&lt;li&gt;How to embed the viewer on your own website with a pre-loaded image&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is an equirectangular image
&lt;/h2&gt;

&lt;p&gt;A 360 panorama is stored as an &lt;strong&gt;equirectangular projection&lt;/strong&gt;: a flat rectangle that maps the full sphere of vision onto a 2:1 aspect ratio image. Think of it like a &lt;a href="https://en.wikipedia.org/wiki/Mercator_projection" rel="noopener noreferrer"&gt;Mercator world map&lt;/a&gt;, but for a room instead of the Earth.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The horizontal axis covers 360 degrees of longitude&lt;/li&gt;
&lt;li&gt;The vertical axis covers 180 degrees of latitude&lt;/li&gt;
&lt;li&gt;The left and right edges must stitch seamlessly; they represent the same point in space&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a 360 viewer loads this image, it wraps it onto the inside of a sphere using &lt;a href="https://threejs.org/" rel="noopener noreferrer"&gt;Three.js&lt;/a&gt; and places the camera at the center. You drag to rotate, scroll to zoom, and the flat image becomes an immersive scene.&lt;/p&gt;
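&lt;p&gt;The sphere wrap boils down to simple trigonometry. Here is a minimal sketch of the per-pixel mapping such a viewer performs; the function name and axis convention are illustrative, not the viewer's actual code:&lt;/p&gt;

```javascript
// Map a pixel (x, y) in a 2:1 equirectangular image to a unit
// direction vector on the viewing sphere. The horizontal axis
// spans 360 degrees of longitude, the vertical 180 of latitude.
function pixelToDirection(x, y, width, height) {
  const lon = (x / width) * 2 * Math.PI - Math.PI; // -PI .. PI
  const lat = Math.PI / 2 - (y / height) * Math.PI; // PI/2 .. -PI/2
  return {
    x: Math.cos(lat) * Math.sin(lon),
    y: Math.sin(lat),
    z: Math.cos(lat) * Math.cos(lon),
  };
}

// The image center looks straight ahead, while the left and right
// edges map to the same direction in space.
const left = pixelToDirection(0, 512, 2048, 1024);
const right = pixelToDirection(2048, 512, 2048, 1024);
```

&lt;p&gt;Note that &lt;code&gt;left&lt;/code&gt; and &lt;code&gt;right&lt;/code&gt; come out numerically identical: that is exactly the seam constraint from the list above, expressed as math.&lt;/p&gt;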

&lt;p&gt;Professional cameras like the &lt;a href="https://theta360.com/en/about/theta/z1.html" rel="noopener noreferrer"&gt;Ricoh Theta Z1&lt;/a&gt; or &lt;a href="https://www.insta360.com/product/insta360-x4" rel="noopener noreferrer"&gt;Insta360 X4&lt;/a&gt; handle stitching automatically. When generating with AI, the model needs to produce that seamless wrap in a single pass.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Open ChatGPT with GPT Image 2
&lt;/h2&gt;

&lt;p&gt;Go to &lt;a href="https://chatgpt.com" rel="noopener noreferrer"&gt;ChatGPT&lt;/a&gt; and make sure you are using a model that supports &lt;a href="https://openai.com/index/introducing-chatgpt-images-2-0/" rel="noopener noreferrer"&gt;GPT Image 2&lt;/a&gt; (GPT-4o with image generation enabled). GPT Image 2 is available on ChatGPT Plus, Team, and Enterprise plans.&lt;/p&gt;

&lt;p&gt;GPT Image 2 does not produce equirectangular projections by default. You need to be explicit about the format, the aspect ratio, and the seamless stitching requirement in every prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Write the prompt
&lt;/h2&gt;

&lt;p&gt;The prompt structure has three critical parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Scene description&lt;/strong&gt;: what you want to see&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Format specification&lt;/strong&gt;: equirectangular projection, 2:1 aspect ratio&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seamless constraint&lt;/strong&gt;: left and right edges must stitch when wrapped onto a sphere&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Base prompt template
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Generate a 360-degree equirectangular panorama image of [SCENE DESCRIPTION].
The image must use equirectangular projection with a 2:1 aspect ratio.
The left and right edges must stitch seamlessly when wrapped onto a sphere.
Photorealistic style. High detail. Even lighting across the full 360 degrees.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example prompt: Gothic cathedral interior
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Generate a 360-degree equirectangular panorama image of the interior of a
gothic cathedral inspired by Sagrada Familia in Barcelona. Tall columns
branching into tree-like structures supporting the ceiling. Stained glass
windows casting colored light across stone surfaces. Warm afternoon light
filtering through the nave. The image must use equirectangular projection
with a 2:1 aspect ratio. The left and right edges must stitch seamlessly
when wrapped onto a sphere. Photorealistic style.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example prompt: Modern apartment
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Generate a 360-degree equirectangular panorama image of a modern minimalist
apartment living room. White walls, large floor-to-ceiling windows with
city skyline view, a gray sectional sofa, wooden coffee table, indoor
plants, and warm pendant lighting. Scandinavian design. The image must use
equirectangular projection with a 2:1 aspect ratio. The left and right
edges must stitch seamlessly when wrapped onto a sphere.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example prompt: Tropical beach
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Generate a 360-degree equirectangular panorama image of a tropical beach
at sunset. Crystal clear turquoise water, white sand, palm trees swaying,
a wooden pier extending into the ocean, dramatic orange and pink sky. The
image must use equirectangular projection with a 2:1 aspect ratio. The
left and right edges must stitch seamlessly when wrapped onto a sphere.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Download the image
&lt;/h2&gt;

&lt;p&gt;After ChatGPT generates the image:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the generated image to open it full size&lt;/li&gt;
&lt;li&gt;Click the download button (arrow pointing down)&lt;/li&gt;
&lt;li&gt;Save it as JPEG or PNG; always grab the highest resolution available&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;GPT Image 2 outputs images at around 2048x1024 pixels. Higher resolution means more detail when the image is stretched across the full 360-degree sphere.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Load it into the interactive viewer
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://kitmul.com/en/image-design/interactive-360-photo-viewer" rel="noopener noreferrer"&gt;Interactive 360 Photo Viewer&lt;/a&gt; and drag your downloaded image onto the upload area. The viewer will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read and decode the image locally&lt;/li&gt;
&lt;li&gt;Map it onto a 3D sphere using WebGL&lt;/li&gt;
&lt;li&gt;Let you drag to look around, scroll to zoom, and toggle auto-rotate&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Everything runs locally in your browser. The image never leaves your device.&lt;/p&gt;

&lt;p&gt;Here is a live example; drag around to explore:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kitmul.com/image-design/interactive-360-photo-viewer?src=/images/360-example.jpg" rel="noopener noreferrer"&gt;&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcdes3ibsdqwrde4xyhe.png"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Tips for better results
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Always specify the 2:1 aspect ratio.&lt;/strong&gt; Without it, the model defaults to square or 16:9. These break the projection math when loaded into a viewer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Describe the full 360-degree scene.&lt;/strong&gt; Instead of "a room with a window", describe all four walls: "large windows on the north wall, a bookshelf on the east wall, a fireplace on the south wall, and a door on the west wall." The model produces better wraps when it understands the complete environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Request even lighting.&lt;/strong&gt; The most common failure is a visible seam where the left and right edges meet. This happens when the lighting differs drastically across the scene. Specifying "even lighting" or "diffuse ambient light" reduces seam artifacts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Interiors work better than open landscapes.&lt;/strong&gt; Enclosed spaces (rooms, cathedrals, caves, studios) produce more consistent equirectangular results. Walls and ceilings anchor the perspective and reduce geometric warping.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Iterate with follow-up prompts.&lt;/strong&gt; If the result has a visible seam or distorted geometry, tell ChatGPT exactly what went wrong: "The left and right edges don't match; there's a visible break in the wall texture. Regenerate with seamless horizontal stitching." GPT Image 2 responds well to specific corrections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Download at full resolution.&lt;/strong&gt; Lower resolution images lose detail when stretched across a sphere. Every pixel counts in equirectangular projection.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to embed the viewer on your own site
&lt;/h2&gt;

&lt;p&gt;The 360 viewer can be embedded on any website using an iframe. The &lt;code&gt;src&lt;/code&gt; query parameter pre-loads a specific image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;iframe&lt;/span&gt;
  &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"https://kitmul.com/en/embed/image-design/interactive-360-photo-viewer?src=YOUR_IMAGE_URL"&lt;/span&gt;
  &lt;span class="na"&gt;width=&lt;/span&gt;&lt;span class="s"&gt;"100%"&lt;/span&gt;
  &lt;span class="na"&gt;height=&lt;/span&gt;&lt;span class="s"&gt;"500"&lt;/span&gt;
  &lt;span class="na"&gt;frameborder=&lt;/span&gt;&lt;span class="s"&gt;"0"&lt;/span&gt;
  &lt;span class="na"&gt;allowfullscreen&lt;/span&gt;
  &lt;span class="na"&gt;loading=&lt;/span&gt;&lt;span class="s"&gt;"lazy"&lt;/span&gt;
&lt;span class="nt"&gt;&amp;gt;&amp;lt;/iframe&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;YOUR_IMAGE_URL&lt;/code&gt; with the public URL of your equirectangular image. The image must be publicly accessible and serve appropriate &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS" rel="noopener noreferrer"&gt;CORS headers&lt;/a&gt;. For images on the same domain, use a relative path: &lt;code&gt;?src=/images/my-panorama.jpg&lt;/code&gt;.&lt;/p&gt;
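&lt;p&gt;If you build that embed URL in code, let the URL API handle the query encoding, because image URLs often contain reserved characters that would otherwise break the &lt;code&gt;src&lt;/code&gt; parameter. A small sketch (the base path comes from the iframe snippet above; the helper name is made up):&lt;/p&gt;

```javascript
// Build the viewer embed URL, percent-encoding the image URL so
// reserved characters inside it survive as a single query value.
function buildEmbedSrc(imageUrl) {
  const embed = new URL(
    'https://kitmul.com/en/embed/image-design/interactive-360-photo-viewer'
  );
  embed.searchParams.set('src', imageUrl);
  return embed.toString();
}

buildEmbedSrc('https://example.com/panos/loft.jpg?v=2');
// the image URL is encoded as src=https%3A%2F%2Fexample.com%2Fpanos%2Floft.jpg%3Fv%3D2
```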

&lt;h2&gt;
  
  
  Limitations to keep in mind
&lt;/h2&gt;

&lt;p&gt;GPT Image 2 is not a 360 camera. The images are impressive but imperfect:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Limitation&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Seam artifacts&lt;/td&gt;
&lt;td&gt;About 30% of generations show a visible seam. Fix with follow-up prompts or &lt;a href="https://docs.gimp.org/en/gimp-filter-distort-shift.html" rel="noopener noreferrer"&gt;GIMP's offset filter&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Geometric inconsistency&lt;/td&gt;
&lt;td&gt;Straight lines sometimes curve or warp, especially in architectural scenes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resolution ceiling&lt;/td&gt;
&lt;td&gt;Max ~2048x1024. Professional 360 cameras shoot at 5.7K+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No depth data&lt;/td&gt;
&lt;td&gt;AI panoramas are flat projections; you can rotate but not move through the space&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  AI-generated vs. professional 360 photos
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;GPT Image 2&lt;/th&gt;
&lt;th&gt;Professional 360 camera&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cost per image&lt;/td&gt;
&lt;td&gt;Free (ChatGPT subscription)&lt;/td&gt;
&lt;td&gt;$200-800+ per session&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time to produce&lt;/td&gt;
&lt;td&gt;30-60 seconds&lt;/td&gt;
&lt;td&gt;Hours (setup + shooting + stitching)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resolution&lt;/td&gt;
&lt;td&gt;~2048x1024&lt;/td&gt;
&lt;td&gt;5760x2880+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Geometric accuracy&lt;/td&gt;
&lt;td&gt;Approximate&lt;/td&gt;
&lt;td&gt;Photogrammetric precision&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Customization&lt;/td&gt;
&lt;td&gt;Unlimited; describe any scene&lt;/td&gt;
&lt;td&gt;Limited to physical locations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Seam quality&lt;/td&gt;
&lt;td&gt;~70% seamless&lt;/td&gt;
&lt;td&gt;Always seamless&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Concepts, prototypes, education&lt;/td&gt;
&lt;td&gt;Listings, documentation, VR&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Use cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real estate virtual staging&lt;/strong&gt;: generate 360 views of unfurnished spaces with different interior styles before the property is decorated&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture concept presentations&lt;/strong&gt;: show clients what a planned interior will feel like before construction begins&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Educational content&lt;/strong&gt;: create 360 views of historical interiors for classroom use, such as Roman villas, medieval castles, and Renaissance chapels&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Game and VR prototyping&lt;/strong&gt;: test spatial feel of environments before committing to full 3D modeling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;E-commerce showrooms&lt;/strong&gt;: generate 360 product display rooms and embed the viewer on product pages&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open ChatGPT with GPT Image 2&lt;/li&gt;
&lt;li&gt;Use the prompt template with explicit equirectangular projection, 2:1 ratio, and seamless edges&lt;/li&gt;
&lt;li&gt;Download the image at full resolution&lt;/li&gt;
&lt;li&gt;Load it into the &lt;a href="https://kitmul.com/en/image-design/interactive-360-photo-viewer" rel="noopener noreferrer"&gt;Interactive 360 Photo Viewer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Drag to explore, zoom in, toggle auto-rotate, go fullscreen&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The entire workflow takes under two minutes and runs entirely in your browser.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Interactive 360 Photo Viewer is free, private, and processes everything locally. No signup, no server uploads. Part of the &lt;a href="https://kitmul.com/en/image-design"&gt;Image &amp;amp; Design Tools&lt;/a&gt; collection on Kitmul.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>I Ran a Neural Network in a Browser Tab to Split a Song into Stems</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Fri, 24 Apr 2026 20:17:22 +0000</pubDate>
      <link>https://forem.com/aralroca/i-ran-a-neural-network-in-a-browser-tab-to-split-a-song-into-stems-10mk</link>
      <guid>https://forem.com/aralroca/i-ran-a-neural-network-in-a-browser-tab-to-split-a-song-into-stems-10mk</guid>
      <description>&lt;p&gt;Last week a friend sent me a voice memo. "I found this incredible bass line in an old soul track," he said, "but I can't isolate it without paying $30/month for some cloud service that wants my email, my credit card, and probably my firstborn."&lt;/p&gt;

&lt;p&gt;He's not wrong. The audio stem separation landscape in 2026 is a mess of subscription walls and cloud uploads. Most tools send your audio to a remote GPU, process it, and send back the stems. You get results in minutes, sure, but your unreleased remix idea now lives on someone else's server.&lt;/p&gt;

&lt;p&gt;I wanted to see if the entire pipeline could run locally, in a browser tab, with zero network requests after the initial page load.&lt;/p&gt;

&lt;p&gt;Turns out it can.&lt;/p&gt;

&lt;h2&gt;
  
  
  What stem separation actually is
&lt;/h2&gt;

&lt;p&gt;For those unfamiliar: &lt;a href="https://en.wikipedia.org/wiki/Source_separation" rel="noopener noreferrer"&gt;source separation&lt;/a&gt; (also called demixing or unmixing) is the process of decomposing a mixed audio signal into its constituent sources. A typical pop track is a sum of vocals, drums, bass, and everything else (guitars, synths, keys, strings). The AI's job is to reverse that sum.&lt;/p&gt;

&lt;p&gt;The state of the art traces back to Meta's &lt;a href="https://github.com/facebookresearch/demucs" rel="noopener noreferrer"&gt;Demucs&lt;/a&gt;, a hybrid model that operates in both time domain and frequency domain simultaneously. It was trained on thousands of multitrack recordings where the individual stems are known, so it learned the spectral fingerprints that distinguish a kick drum from a bass guitar from a human voice.&lt;/p&gt;

&lt;p&gt;The interesting bit is that Demucs v4 (htdemucs) uses a &lt;a href="https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)" rel="noopener noreferrer"&gt;transformer architecture&lt;/a&gt; fused with a convolutional U-Net. The transformer handles long-range dependencies (like a sustained vocal note over a drum fill), while the U-Net captures local spectral patterns. The result is significantly less "bleeding" between stems compared to older approaches.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running it in the browser with ONNX + WebAssembly
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/music/audio-stem-splitter" rel="noopener noreferrer"&gt;Audio Stem Splitter&lt;/a&gt; on Kitmul loads an ONNX-exported version of the Demucs model and runs inference entirely via &lt;a href="https://onnxruntime.ai/" rel="noopener noreferrer"&gt;ONNX Runtime Web&lt;/a&gt; backed by WebAssembly. No server. No upload. The audio bytes never leave your machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5321d99f85ci387blukr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5321d99f85ci387blukr.webp" alt="The Kitmul Audio Stem Splitter interface showing the upload area and generated stems panel" width="800" height="609"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's what happens when you drop an audio file:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The file is decoded to raw PCM using the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API" rel="noopener noreferrer"&gt;Web Audio API&lt;/a&gt;'s &lt;code&gt;decodeAudioData&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;If the sample rate isn't 44100 Hz, it gets resampled via an &lt;code&gt;OfflineAudioContext&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The audio is chunked and fed through the ONNX model in a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API" rel="noopener noreferrer"&gt;Web Worker&lt;/a&gt; to avoid blocking the UI thread&lt;/li&gt;
&lt;li&gt;The model outputs four spectral masks (vocals, drums, bass, other)&lt;/li&gt;
&lt;li&gt;Each mask is applied to the original spectrogram to produce isolated stems&lt;/li&gt;
&lt;li&gt;The stems are encoded back to WAV for download&lt;/li&gt;
&lt;/ol&gt;
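&lt;p&gt;Step 3 is where most of the engineering lives: the model can't take a whole song at once, so the PCM data is cut into fixed-size, overlapping windows. A simplified sketch of that chunking (sizes and names are illustrative, not the tool's actual internals):&lt;/p&gt;

```javascript
// Split a mono PCM buffer into fixed-size, overlapping chunks, the
// shape a separation model is typically fed. The final chunk is
// zero-padded so every chunk has the same length.
function chunkPcm(samples, chunkSize, overlap) {
  const hop = chunkSize - overlap; // samples advanced per chunk
  const chunks = [];
  for (let start = 0; start < samples.length; start += hop) {
    const chunk = new Float32Array(chunkSize);
    chunk.set(samples.subarray(start, start + chunkSize));
    chunks.push(chunk);
    if (start + chunkSize >= samples.length) break;
  }
  return chunks;
}

const pcm = Float32Array.from({ length: 10 }, (_, i) => i);
const chunks = chunkPcm(pcm, 4, 2); // 4 chunks, hop of 2 samples
```

&lt;p&gt;In a real pipeline the overlapping regions are crossfaded when the stems are reassembled, which hides artifacts at chunk boundaries.&lt;/p&gt;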

&lt;p&gt;The whole pipeline is embarrassingly parallel in theory, but in practice you're bounded by the single WASM thread and available RAM. A 4-minute song takes roughly 3-5 minutes on a modern laptop. Not fast, but not bad for running a neural network in a browser tab.&lt;/p&gt;

&lt;h2&gt;
  
  
  The privacy argument nobody is making
&lt;/h2&gt;

&lt;p&gt;Every time you upload a track to LALAL.AI, Moises, or Stem Roller, you're sending potentially copyrighted audio (or your own unreleased work) to a third-party server. Their privacy policies usually say they "don't store your files permanently," but the operative word is "permanently."&lt;/p&gt;

&lt;p&gt;With client-side processing, the question of data retention is moot. There's nothing to retain. Your browser downloads the model weights once (cached for future visits), runs the math locally, and produces output files that exist only in your device's memory until you explicitly save them.&lt;/p&gt;

&lt;p&gt;This matters especially for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Producers&lt;/strong&gt; working with unreleased material&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DJs&lt;/strong&gt; preparing sets with copyrighted tracks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Music teachers&lt;/strong&gt; creating practice tracks for students&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Forensic audio analysts&lt;/strong&gt; working with sensitive recordings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flff6fig6loj64hgi2s9y.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flff6fig6loj64hgi2s9y.webp" alt="A music studio with instruments and warm ambient lighting" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical use cases I didn't expect
&lt;/h2&gt;

&lt;p&gt;The obvious use case is karaoke (remove vocals, sing along). But I've seen people use stem separation for things I hadn't considered:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transcription aid.&lt;/strong&gt; A jazz pianist told me she splits out the piano stem from classic recordings to transcribe voicings more accurately. When you can hear the piano in isolation, you catch harmonic details that get buried in the full mix.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample archaeology.&lt;/strong&gt; Hip-hop producers dig through vinyl rips looking for loops. Isolating the drum break from a 1970s funk track gives you a clean sample without having to EQ out the horns by hand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility.&lt;/strong&gt; Someone who is hard of hearing mentioned that boosting the vocal stem and attenuating the instrumental makes dialogue-heavy content (podcasts with music beds, film scenes) significantly clearer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A/B testing mixes.&lt;/strong&gt; If you're learning to mix, splitting a professional track into stems lets you rebuild the mix from scratch in your DAW and compare your choices against the original balance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The model's limitations (honest take)
&lt;/h2&gt;

&lt;p&gt;The separation isn't perfect. Here's where the model struggles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Heavily compressed or low-bitrate audio&lt;/strong&gt; produces more artifacts. Start with 320kbps MP3 or WAV if you can.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dense arrangements&lt;/strong&gt; with many layered instruments bleed more into the "other" stem. A solo guitar-and-voice track separates beautifully; a wall-of-sound Phil Spector production, not so much.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mono recordings&lt;/strong&gt; lose the spatial cues that help the model distinguish sources. Stereo is always better.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Very long files&lt;/strong&gt; (&amp;gt;10 minutes) will challenge your device's RAM. The 50MB file size limit is there for a reason.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you need studio-grade results for a commercial release, you probably want &lt;a href="https://www.izotope.com/en/rx.html" rel="noopener noreferrer"&gt;iZotope RX&lt;/a&gt; or the full Demucs CLI on a GPU. But for quick workflows, creative exploration, or situations where privacy matters more than perfection, browser-based separation is genuinely useful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdyca64jc7fvl1ujbpcg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdyca64jc7fvl1ujbpcg.webp" alt="Musical score and waveform visualization concept" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How it compares to the competition
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Kitmul Stem Splitter&lt;/th&gt;
&lt;th&gt;LALAL.AI&lt;/th&gt;
&lt;th&gt;Moises&lt;/th&gt;
&lt;th&gt;Demucs CLI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Processing&lt;/td&gt;
&lt;td&gt;100% local (browser)&lt;/td&gt;
&lt;td&gt;Cloud GPU&lt;/td&gt;
&lt;td&gt;Cloud GPU&lt;/td&gt;
&lt;td&gt;Local GPU/CPU&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Price&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;$15-30/mo&lt;/td&gt;
&lt;td&gt;$4-17/mo&lt;/td&gt;
&lt;td&gt;Free (OSS)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Privacy&lt;/td&gt;
&lt;td&gt;No upload&lt;/td&gt;
&lt;td&gt;Upload required&lt;/td&gt;
&lt;td&gt;Upload required&lt;/td&gt;
&lt;td&gt;No upload&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Setup&lt;/td&gt;
&lt;td&gt;Zero&lt;/td&gt;
&lt;td&gt;Account + payment&lt;/td&gt;
&lt;td&gt;Account + payment&lt;/td&gt;
&lt;td&gt;Python + ffmpeg&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quality&lt;/td&gt;
&lt;td&gt;Good (ONNX htdemucs)&lt;/td&gt;
&lt;td&gt;Very good&lt;/td&gt;
&lt;td&gt;Very good&lt;/td&gt;
&lt;td&gt;Best (full model)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Speed&lt;/td&gt;
&lt;td&gt;3-5 min/song&lt;/td&gt;
&lt;td&gt;~30 sec&lt;/td&gt;
&lt;td&gt;~1 min&lt;/td&gt;
&lt;td&gt;~30 sec (GPU)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The tradeoff is clear: you sacrifice some speed and marginal quality for zero setup, zero cost, and complete privacy. For most non-professional workflows, that's the right call.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Web Audio API is more capable than you think
&lt;/h2&gt;

&lt;p&gt;Building this reinforced something I keep discovering: the browser audio stack is &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API" rel="noopener noreferrer"&gt;seriously underrated&lt;/a&gt;. Between &lt;code&gt;AudioContext&lt;/code&gt; for real-time processing, &lt;code&gt;OfflineAudioContext&lt;/code&gt; for offline rendering, &lt;code&gt;AudioWorklet&lt;/code&gt; for custom DSP on a dedicated thread, and now ONNX Runtime Web for running neural networks, you can build legitimate audio production tools that would have required native apps five years ago.&lt;/p&gt;

&lt;p&gt;If you're a developer interested in this space, the combination of &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers" rel="noopener noreferrer"&gt;Web Workers&lt;/a&gt; for heavy computation + &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer" rel="noopener noreferrer"&gt;SharedArrayBuffer&lt;/a&gt; for zero-copy data transfer + WASM for near-native math performance is the stack to bet on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/music/audio-stem-splitter" rel="noopener noreferrer"&gt;Audio Stem Splitter&lt;/a&gt; is free, works in any modern browser, and processes everything locally. Drop an MP3 or WAV, wait a few minutes, and download your isolated vocals, drums, bass, and instrumental tracks.&lt;/p&gt;

&lt;p&gt;If you're into music production, the &lt;a href="https://kitmul.com/en/music/loop-music-creator" rel="noopener noreferrer"&gt;Loop Music Creator&lt;/a&gt; (browser-based DAW) and the &lt;a href="https://kitmul.com/en/music/youtube-loop-mix" rel="noopener noreferrer"&gt;YouTube Loop Mix&lt;/a&gt; (dual-deck DJ tool) pair well with separated stems for remixing workflows.&lt;/p&gt;

&lt;p&gt;All three tools run in your browser. No accounts. No uploads. No subscriptions.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>machinelearning</category>
      <category>showdev</category>
      <category>webdev</category>
    </item>
    <item>
      <title>One PR to a parser unlocked prerendering in Brisa</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Thu, 23 Apr 2026 20:30:35 +0000</pubDate>
      <link>https://forem.com/aralroca/one-pr-to-a-parser-unlocked-prerendering-in-brisa-ijo</link>
      <guid>https://forem.com/aralroca/one-pr-to-a-parser-unlocked-prerendering-in-brisa-ijo</guid>
      <description>&lt;p&gt;I built a JavaScript framework called &lt;a href="https://brisa.build" rel="noopener noreferrer"&gt;Brisa&lt;/a&gt;. The kind of framework that needs to parse every single source file your app contains; analyze imports, detect server vs. client components, inject macros, transform JSX. All of that happens at the AST level.&lt;/p&gt;

&lt;p&gt;Before Brisa, I was already maintaining &lt;a href="https://github.com/aralroca/next-translate" rel="noopener noreferrer"&gt;next-translate&lt;/a&gt;, an i18n library for Next.js. For the plugin that auto-injects locale loaders into pages, I used the TypeScript compiler API. It worked. It was also painfully slow: &lt;code&gt;ts.createProgram()&lt;/code&gt; for every page file at build time, full type-checker instantiation, lib resolution. We had to add &lt;code&gt;noResolve: true&lt;/code&gt; and &lt;code&gt;noLib: true&lt;/code&gt; just to make it bearable. The parser was doing ten times more work than we needed because all we wanted was the AST, not the types.&lt;/p&gt;

&lt;p&gt;When I started building Brisa, I knew I needed something faster. Something that gave me an ESTree-compliant AST without the overhead of a full compiler. That's how I found &lt;a href="https://github.com/meriyah/meriyah" rel="noopener noreferrer"&gt;Meriyah&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I chose Meriyah over everything else
&lt;/h2&gt;

&lt;p&gt;Meriyah is written entirely in JavaScript. No native bindings. No WASM loading step. No compilation step. Just &lt;code&gt;parseScript(code, { jsx: true, module: true, next: true })&lt;/code&gt; and you get back an &lt;a href="https://github.com/estree/estree" rel="noopener noreferrer"&gt;ESTree&lt;/a&gt; AST in microseconds.&lt;/p&gt;

&lt;p&gt;For Brisa's build pipeline, that speed difference compounds. Every source file in a Brisa project passes through Meriyah. The parser runs inside &lt;code&gt;AST().parseCodeToAST()&lt;/code&gt;, which first transpiles via Bun's transpiler and then feeds the result to Meriyah. The output is a standard ESTree &lt;code&gt;Program&lt;/code&gt; node that I can traverse, modify, and regenerate with &lt;a href="https://github.com/davidbonnet/astring" rel="noopener noreferrer"&gt;astring&lt;/a&gt;.&lt;/p&gt;
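&lt;p&gt;To make "traverse, modify, and regenerate" concrete, here is a dependency-free sketch: a hand-written ESTree fragment of the kind Meriyah returns for an import statement, plus a minimal recursive walker. This is an illustration of the node shapes, not Brisa's actual traversal code:&lt;/p&gt;

```javascript
// Hand-written ESTree Program for:  import x from 'mod'
// Every node carries a string `type`; children live in arrays
// (like `body`) or nested objects (like `source`).
const program = {
  type: 'Program',
  sourceType: 'module',
  body: [
    {
      type: 'ImportDeclaration',
      specifiers: [
        { type: 'ImportDefaultSpecifier', local: { type: 'Identifier', name: 'x' } },
      ],
      source: { type: 'Literal', value: 'mod' },
    },
  ],
};

// Minimal depth-first walker over any ESTree-shaped object.
function walk(node, visit) {
  if (!node || typeof node.type !== 'string') return;
  visit(node);
  for (const value of Object.values(node)) {
    if (Array.isArray(value)) {
      value.forEach(function (child) { walk(child, visit); });
    } else if (value !== null && typeof value === 'object') {
      walk(value, visit);
    }
  }
}

const types = [];
walk(program, function (n) { types.push(n.type); });
// types now lists Program, ImportDeclaration, ImportDefaultSpecifier,
// Identifier, Literal -- the nodes a build transform would inspect.
```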

&lt;p&gt;But here's where it got interesting. Brisa has a feature called &lt;a href="https://brisa.build/api-reference/extended-props/renderOn" rel="noopener noreferrer"&gt;&lt;code&gt;renderOn&lt;/code&gt;&lt;/a&gt; that lets you prerender components at build time. You write this in your page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SomeComponent&lt;/span&gt; &lt;span class="na"&gt;renderOn&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"build"&lt;/span&gt; &lt;span class="na"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"bar"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And at build time, the AST transform detects &lt;code&gt;renderOn="build"&lt;/code&gt;, replaces the JSX with a &lt;code&gt;__prerender__macro()&lt;/code&gt; call, and injects this import at the top of the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;__prerender__macro&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;brisa/macros&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="kd"&gt;with&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;macro&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;with { type: 'macro' }&lt;/code&gt; is an &lt;a href="https://github.com/tc39/proposal-import-attributes" rel="noopener noreferrer"&gt;import attribute&lt;/a&gt; that tells &lt;a href="https://bun.sh/docs/bundler/macros" rel="noopener noreferrer"&gt;Bun's bundler&lt;/a&gt; to resolve the import at compile time. The component gets rendered during the build, and the result is injected as static HTML. The user writes &lt;code&gt;renderOn="build"&lt;/code&gt;, but under the hood the framework constructs &lt;code&gt;ImportDeclaration&lt;/code&gt; and &lt;code&gt;ImportAttribute&lt;/code&gt; AST nodes by hand and regenerates the code.&lt;/p&gt;
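
&lt;p&gt;Constructing those nodes by hand is less exotic than it sounds: they're plain objects. Here's a sketch of what the injected import looks like as data. Field names follow the ESTree import-attributes shape described in this article; exact details can vary slightly between parsers:&lt;/p&gt;

```javascript
// Hand-built ESTree nodes for:
//   import { __prerender__macro } from 'brisa/macros' with { type: 'macro' };
const importNode = {
  type: 'ImportDeclaration',
  specifiers: [{
    type: 'ImportSpecifier',
    imported: { type: 'Identifier', name: '__prerender__macro' },
    local: { type: 'Identifier', name: '__prerender__macro' },
  }],
  source: { type: 'Literal', value: 'brisa/macros' },
  attributes: [{
    type: 'ImportAttribute',
    key: { type: 'Literal', value: 'type' },
    value: { type: 'Literal', value: 'macro' },
  }],
};

// A transform injects it by prepending to the Program body:
const program = { type: 'Program', sourceType: 'module', body: [] };
program.body.unshift(importNode);

console.log(program.body[0].attributes[0].value.value); // 'macro'
```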

&lt;p&gt;The problem: Meriyah didn't support &lt;a href="https://github.com/meriyah/meriyah/pull/280" rel="noopener noreferrer"&gt;import attributes&lt;/a&gt; when I started using it. So I contributed a PR to add the feature. That PR landed, and Brisa's entire prerender pipeline could work end to end.&lt;/p&gt;

&lt;p&gt;Going from "the parser can't handle my syntax" to "I'll fix the parser itself" is the kind of thing that only happens when you deeply understand how ASTs work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The inspiration
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://astexplorer.net/" rel="noopener noreferrer"&gt;AST Explorer&lt;/a&gt; exists, and it's great. I use it regularly. It's the reference tool for exploring ASTs. I wanted to build something similar as part of &lt;a href="https://kitmul.com" rel="noopener noreferrer"&gt;Kitmul&lt;/a&gt;; my own version of an AST visualizer with parser selection, interactive tree view, and support for the parsers I use daily.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/visualizers-logic/ast-visualizer" rel="noopener noreferrer"&gt;AST Visualizer&lt;/a&gt; does exactly this. Paste JavaScript, pick your parser (Acorn, Meriyah, or SWC), and get an interactive tree or raw JSON. Everything runs locally in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycfdiuezaeen04u7i9dn.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycfdiuezaeen04u7i9dn.webp" alt="The AST Visualizer showing a JavaScript function parsed with Acorn into an interactive tree view; Monaco editor on the left, collapsible AST nodes on the right, with parser and view mode selectors in the toolbar" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The parser choice matters because each one produces a slightly different AST:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/acornjs/acorn" rel="noopener noreferrer"&gt;Acorn&lt;/a&gt;&lt;/strong&gt; follows the ESTree spec strictly. It's the parser that &lt;a href="https://eslint.org/" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; uses internally. If you're writing ESLint rules, this is the tree your rule will traverse.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/meriyah/meriyah" rel="noopener noreferrer"&gt;Meriyah&lt;/a&gt;&lt;/strong&gt; also follows ESTree, but adds JSX support and bleeding-edge features via the &lt;code&gt;next: true&lt;/code&gt; flag. It's the parser I chose for Brisa because it's fast, lightweight, and written in pure JS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://swc.rs/" rel="noopener noreferrer"&gt;SWC&lt;/a&gt;&lt;/strong&gt; is a Rust-based compiler that runs via WASM in the browser. Its AST uses a different structure; &lt;code&gt;Module&lt;/code&gt; instead of &lt;code&gt;Program&lt;/code&gt;, &lt;code&gt;span&lt;/code&gt; objects instead of &lt;code&gt;start&lt;/code&gt;/&lt;code&gt;end&lt;/code&gt; positions. If you're working with Next.js or Turbopack internals, this is the AST you're dealing with.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Switching between parsers and seeing how the same code produces different trees is one of the fastest ways to understand parser differences. Try parsing &lt;code&gt;const x = 42;&lt;/code&gt; with all three and compare the node types.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three things the tree teaches you that docs don't
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlpqylo98bvk6wxefz2j.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlpqylo98bvk6wxefz2j.webp" alt="next-translate bundle size comparison; the i18n library for Next.js where I first dealt with AST parsing via the TypeScript compiler API" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Expressions vs. statements are visible.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every JavaScript developer hears "expression vs. statement" at some point. Few can articulate the difference until they see it in a tree. Consider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AST shows an &lt;code&gt;ExpressionStatement&lt;/code&gt; wrapping an &lt;code&gt;AssignmentExpression&lt;/code&gt;. The expression is the &lt;code&gt;x = 5&lt;/code&gt; part. The statement is the semicolon-terminated wrapper that makes it a standalone line. This distinction is why &lt;code&gt;if (x = 5)&lt;/code&gt; is legal JavaScript; the assignment is an expression, and expressions are valid inside conditions.&lt;/p&gt;
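
&lt;p&gt;As a plain object, that tree looks like this (a hand-written sketch of the shape an ESTree parser produces):&lt;/p&gt;

```javascript
// The tree for `x = 5;`: the statement is the wrapper, the assignment is
// the expression inside it.
const stmt = {
  type: 'ExpressionStatement',
  expression: {
    type: 'AssignmentExpression',
    operator: '=',
    left: { type: 'Identifier', name: 'x' },
    right: { type: 'Literal', value: 5 },
  },
};

console.log(stmt.expression.type); // 'AssignmentExpression'
```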

&lt;p&gt;&lt;strong&gt;2. Operator precedence becomes structural.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Parse &lt;code&gt;2 + 3 * 4&lt;/code&gt; and you'll see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BinaryExpression (operator: "+")
  ├─ left: Literal (2)
  └─ right: BinaryExpression (operator: "*")
           ├─ left: Literal (3)
           └─ right: Literal (4)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The multiplication is &lt;em&gt;nested inside&lt;/em&gt; the addition's right operand. That's not a formatting choice; that's the parser encoding precedence into structure. The deeper node evaluates first. Parentheses change the tree structure, not some invisible priority flag.&lt;/p&gt;
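
&lt;p&gt;You can see the consequence directly: a post-order evaluation of that tree needs no precedence table at all, because the parser already encoded it. A small sketch with a hand-built tree (only &lt;code&gt;+&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt; handled, for brevity):&lt;/p&gt;

```javascript
// The nesting for `2 + 3 * 4`: the multiplication sits inside the
// addition's right operand.
const expr = {
  type: 'BinaryExpression',
  operator: '+',
  left: { type: 'Literal', value: 2 },
  right: {
    type: 'BinaryExpression',
    operator: '*',
    left: { type: 'Literal', value: 3 },
    right: { type: 'Literal', value: 4 },
  },
};

// Evaluate children first, then the node itself: the deeper node wins.
function evaluate(node) {
  if (node.type === 'Literal') return node.value;
  const l = evaluate(node.left);
  const r = evaluate(node.right);
  return node.operator === '+' ? l + r : l * r;
}

console.log(evaluate(expr)); // 14
```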

&lt;p&gt;&lt;strong&gt;3. Import attributes reveal how &lt;code&gt;renderOn="build"&lt;/code&gt; works.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Parse this with Meriyah:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;__prerender__macro&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;brisa/macros&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="kd"&gt;with&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;macro&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;ImportDeclaration&lt;/code&gt; node gets an &lt;code&gt;attributes&lt;/code&gt; array containing &lt;code&gt;ImportAttribute&lt;/code&gt; nodes. Each attribute has a &lt;code&gt;key&lt;/code&gt; and a &lt;code&gt;value&lt;/code&gt;, both &lt;code&gt;Literal&lt;/code&gt; nodes. This is the import that Brisa's build pipeline injects when it finds &lt;code&gt;renderOn="build"&lt;/code&gt; on a component. The &lt;code&gt;with { type: 'macro' }&lt;/code&gt; tells Bun to resolve the function at compile time. Without seeing the tree, you'd never guess that &lt;code&gt;with { type: 'macro' }&lt;/code&gt; becomes a nested array of attribute objects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real use cases from building frameworks
&lt;/h2&gt;

&lt;p&gt;I keep hearing "ASTs are for compiler people." No. ASTs are for anyone who writes tools that operate on code. Here's where I've actually used AST knowledge:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Framework build pipelines.&lt;/strong&gt; In &lt;a href="https://brisa.build" rel="noopener noreferrer"&gt;Brisa&lt;/a&gt;, every source file is parsed to an AST, analyzed for imports, transformed (macro injection, server/client separation, i18n processing), and regenerated as code. The central function is &lt;code&gt;AST('tsx').parseCodeToAST(code)&lt;/code&gt;, which returns an ESTree &lt;code&gt;Program&lt;/code&gt; node. Without understanding the tree, I couldn't write a single one of those transforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerender macro injection via &lt;code&gt;renderOn="build"&lt;/code&gt;.&lt;/strong&gt; When Brisa encounters &lt;code&gt;&amp;lt;Foo renderOn="build" /&amp;gt;&lt;/code&gt;, the AST transform constructs &lt;code&gt;ImportAttribute&lt;/code&gt; nodes by hand to inject &lt;code&gt;import {__prerender__macro} from 'brisa/macros' with { type: "macro" }&lt;/code&gt;. There's a quirk: Meriyah uses &lt;code&gt;value&lt;/code&gt; on Literal nodes where astring expects &lt;code&gt;name&lt;/code&gt;. That's an actual comment in the &lt;a href="https://github.com/brisa-build/brisa" rel="noopener noreferrer"&gt;Brisa source code&lt;/a&gt;: &lt;code&gt;// This astring is looking for "name", but meriyah "value"&lt;/code&gt;. You only discover that kind of thing by staring at trees.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;i18n loader injection.&lt;/strong&gt; In &lt;a href="https://github.com/aralroca/next-translate-plugin" rel="noopener noreferrer"&gt;next-translate-plugin&lt;/a&gt;, the Webpack loader uses &lt;code&gt;ts.createProgram()&lt;/code&gt; to parse each page and detect its exports. It needs to know whether the page has &lt;code&gt;getStaticProps&lt;/code&gt;, &lt;code&gt;getServerSideProps&lt;/code&gt;, or a default export, so it can inject the right locale loader. The TypeScript AST uses &lt;code&gt;SyntaxKind&lt;/code&gt; enums instead of string-based types, which is a different mental model from ESTree. Seeing both trees side by side clarifies the difference instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Import path resolution.&lt;/strong&gt; Brisa resolves relative imports to absolute paths at build time. The transform walks &lt;code&gt;ImportDeclaration&lt;/code&gt; nodes, reads the &lt;code&gt;source.value&lt;/code&gt; string, resolves it against the file system, and replaces it. This is 30 lines of code once you understand that &lt;code&gt;ImportDeclaration.source&lt;/code&gt; is a &lt;code&gt;Literal&lt;/code&gt; node with a &lt;code&gt;value&lt;/code&gt; property.&lt;/p&gt;

&lt;h2&gt;
  
  
  The search feature saves more time than you'd expect
&lt;/h2&gt;

&lt;p&gt;The visualizer includes a search bar that filters nodes by type, name, or value. Type "Identifier" and every identifier in the tree highlights. Type "import" and you find every import-related node instantly.&lt;/p&gt;

&lt;p&gt;This sounds trivial until you're debugging a framework transform and you need to find every &lt;code&gt;ImportDeclaration&lt;/code&gt; in a 200-line file. Scrolling through an expanded tree is slow. Searching for &lt;code&gt;ImportDeclaration&lt;/code&gt; and seeing exactly where they sit in the hierarchy is fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing parsers side by side
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Acorn&lt;/th&gt;
&lt;th&gt;Meriyah&lt;/th&gt;
&lt;th&gt;SWC&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Language&lt;/td&gt;
&lt;td&gt;JavaScript&lt;/td&gt;
&lt;td&gt;JavaScript&lt;/td&gt;
&lt;td&gt;Rust (WASM in browser)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Spec&lt;/td&gt;
&lt;td&gt;ESTree&lt;/td&gt;
&lt;td&gt;ESTree&lt;/td&gt;
&lt;td&gt;SWC AST&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JSX support&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Import attributes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Speed&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;td&gt;Very fast&lt;/td&gt;
&lt;td&gt;Fast (after WASM load)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bundle size&lt;/td&gt;
&lt;td&gt;~120KB&lt;/td&gt;
&lt;td&gt;~320KB&lt;/td&gt;
&lt;td&gt;~14MB (WASM)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Used by&lt;/td&gt;
&lt;td&gt;ESLint&lt;/td&gt;
&lt;td&gt;Brisa&lt;/td&gt;
&lt;td&gt;Next.js, Turbopack&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The point isn't that one parser is better. Each one has trade-offs. Acorn is the standard. Meriyah is the fast, feature-rich option. SWC is the heavyweight that handles everything but requires loading 14MB of WASM. The &lt;a href="https://kitmul.com/en/visualizers-logic/ast-visualizer" rel="noopener noreferrer"&gt;AST Visualizer&lt;/a&gt; lets you switch between all three and see the differences.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycfdiuezaeen04u7i9dn.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycfdiuezaeen04u7i9dn.webp" alt="The AST Visualizer tool showing parser selection between Acorn, Meriyah, and SWC with tree and JSON view modes" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Five code snippets worth exploring
&lt;/h2&gt;

&lt;p&gt;Paste these into the &lt;a href="https://kitmul.com/en/visualizers-logic/ast-visualizer" rel="noopener noreferrer"&gt;AST Visualizer&lt;/a&gt; and try each parser:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Arrow function with implicit return:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how the &lt;code&gt;ArrowFunctionExpression&lt;/code&gt; has &lt;code&gt;expression: true&lt;/code&gt; and the body is a &lt;code&gt;BinaryExpression&lt;/code&gt;, not a &lt;code&gt;BlockStatement&lt;/code&gt;. That boolean flag is how tools distinguish &lt;code&gt;=&amp;gt; x&lt;/code&gt; from &lt;code&gt;=&amp;gt; { return x; }&lt;/code&gt;.&lt;/p&gt;
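
&lt;p&gt;That flag is exactly what a codemod checks. A tiny sketch, with hand-written node objects standing in for parser output:&lt;/p&gt;

```javascript
// `expression: true` means the arrow body is an expression (implicit return);
// `expression: false` means the body is a BlockStatement.
function hasImplicitReturn(arrowNode) {
  return arrowNode.expression === true;
}

const implicit = {
  type: 'ArrowFunctionExpression',
  expression: true,
  body: { type: 'BinaryExpression', operator: '+' },
};
const explicit = {
  type: 'ArrowFunctionExpression',
  expression: false,
  body: { type: 'BlockStatement', body: [] },
};

console.log(hasImplicitReturn(implicit), hasImplicitReturn(explicit)); // true false
```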

&lt;p&gt;&lt;strong&gt;2. Import attributes (use Meriyah or SWC):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;__prerender__macro&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;brisa/macros&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="kd"&gt;with&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;macro&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Meriyah, the &lt;code&gt;ImportDeclaration&lt;/code&gt; gets an &lt;code&gt;attributes&lt;/code&gt; array with &lt;code&gt;ImportAttribute&lt;/code&gt; nodes. With Acorn, this syntax will throw a parse error. That's exactly the kind of parser difference that matters in practice; Brisa's build pipeline depends on it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Optional chaining:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;nested&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;deep&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;property&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The parser wraps the whole chain in a single &lt;code&gt;ChainExpression&lt;/code&gt;, and each &lt;code&gt;?.&lt;/code&gt; marks its &lt;code&gt;MemberExpression&lt;/code&gt; with &lt;code&gt;optional: true&lt;/code&gt;. The chain itself is one node, not a stack of nested optionals.&lt;/p&gt;
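
&lt;p&gt;In object form, for the shorter &lt;code&gt;obj?.nested&lt;/code&gt; (simplified, hand-written):&lt;/p&gt;

```javascript
// One ChainExpression wrapping the member access, with `optional: true`
// on the access itself.
const node = {
  type: 'ChainExpression',
  expression: {
    type: 'MemberExpression',
    optional: true,
    object: { type: 'Identifier', name: 'obj' },
    property: { type: 'Identifier', name: 'nested' },
  },
};

console.log(node.type, node.expression.optional); // 'ChainExpression' true
```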

&lt;p&gt;&lt;strong&gt;4. Async/await:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;fetchData&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/data&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;FunctionDeclaration&lt;/code&gt; has &lt;code&gt;async: true&lt;/code&gt;. The &lt;code&gt;await&lt;/code&gt; keyword creates an &lt;code&gt;AwaitExpression&lt;/code&gt; wrapping the &lt;code&gt;CallExpression&lt;/code&gt;. This is why you can't use &lt;code&gt;await&lt;/code&gt; outside an async function (or a module's top level); the parser enforces the nesting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Destructuring with defaults:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This generates a deeply nested tree. The defaults create &lt;code&gt;AssignmentPattern&lt;/code&gt; nodes. The nested destructuring puts an &lt;code&gt;ObjectPattern&lt;/code&gt; inside a &lt;code&gt;Property&lt;/code&gt; value. This is the kind of structure where a tree view is worth a thousand words.&lt;/p&gt;

&lt;h2&gt;
  
  
  Privacy
&lt;/h2&gt;

&lt;p&gt;All three parsers run entirely in your browser. Acorn and Meriyah are JavaScript libraries that execute client-side. SWC loads a WASM binary from a local file. No code is transmitted to any server. No analytics track what you paste. If you're parsing proprietary source code, nothing leaves your device.&lt;/p&gt;

&lt;p&gt;If you're working with code that needs other kinds of analysis, the &lt;a href="https://kitmul.com/en/visualizers-logic"&gt;Visualizers &amp;amp; Logic Tools collection&lt;/a&gt; includes graph visualizers, truth table generators, and regex tools that pair well with AST work. For tracking your learning sessions, the &lt;a href="https://kitmul.com/en/agile-project-management/pomodoro-agile"&gt;Pomodoro Timer&lt;/a&gt; with built-in focus music is surprisingly effective for problem sets.&lt;/p&gt;

&lt;h2&gt;
  
  
  The actual takeaway
&lt;/h2&gt;

&lt;p&gt;ASTs aren't magic. They're trees. Every piece of code you've ever written has a tree representation that a parser produces in milliseconds. The gap between "I've heard of ASTs" and "I can build a framework's compiler pipeline" is mostly about seeing enough trees that the patterns become obvious.&lt;/p&gt;

&lt;p&gt;I went from struggling with the TypeScript compiler API in next-translate to contributing parser features to Meriyah for Brisa. The turning point wasn't reading more documentation. It was seeing enough ASTs that the node types became second nature.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/visualizers-logic/ast-visualizer" rel="noopener noreferrer"&gt;AST Visualizer&lt;/a&gt; won't teach you compiler theory. It'll teach you what the parser sees when it reads your code. For writing framework internals, build tools, codemods, and ESLint rules, that's the only thing that matters.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The AST Visualizer is free, private, and runs entirely in your browser. No signup, no install, no data leaves your device. Part of the &lt;a href="https://kitmul.com/en/visualizers-logic"&gt;Visualizers &amp;amp; Logic Tools&lt;/a&gt; collection on Kitmul.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>opensource</category>
      <category>performance</category>
    </item>
    <item>
      <title>I Stopped Installing Qiskit to Understand Hadamard Gates</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Wed, 22 Apr 2026 18:53:44 +0000</pubDate>
      <link>https://forem.com/aralroca/i-stopped-installing-qiskit-to-understand-hadamard-gates-1m8n</link>
      <guid>https://forem.com/aralroca/i-stopped-installing-qiskit-to-understand-hadamard-gates-1m8n</guid>
      <description>&lt;p&gt;I spent six years working adjacent to quantum computing research. Not building qubits; building the classical software that talks to the hardware. The thing that surprised me most wasn't the physics. It was how bad the tooling was for anyone trying to &lt;em&gt;learn&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The education gap is the real bottleneck
&lt;/h2&gt;

&lt;p&gt;The hardware is advancing. &lt;a href="https://research.ibm.com/blog/next-wave-quantum-centric-supercomputing" rel="noopener noreferrer"&gt;IBM's 1,121-qubit Condor processor&lt;/a&gt; exists. Google's Willow chip hit a &lt;a href="https://blog.google/technology/research/google-willow-quantum-chip/" rel="noopener noreferrer"&gt;below-threshold error correction milestone&lt;/a&gt; in late 2024. But ask a CS undergrad to explain what a Hadamard gate actually does to a qubit's state vector, and you'll get a blank stare followed by a memorized sentence from a textbook.&lt;/p&gt;

&lt;p&gt;The problem isn't intelligence. The problem is that every learning path for quantum circuits funnels you into one of two dead ends:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Pen-and-paper linear algebra.&lt;/strong&gt; You compute tensor products by hand. You multiply 8x8 matrices. By the time you've verified that your CNOT gate works correctly, you've spent 40 minutes and lost all intuition for what the circuit &lt;em&gt;does&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Full SDK installations.&lt;/strong&gt; &lt;a href="https://qiskit.org/" rel="noopener noreferrer"&gt;Qiskit&lt;/a&gt;, &lt;a href="https://quantumai.google/cirq" rel="noopener noreferrer"&gt;Cirq&lt;/a&gt;, &lt;a href="https://pennylane.ai/" rel="noopener noreferrer"&gt;PennyLane&lt;/a&gt;. These are serious tools for serious work. They're also 200MB+ installs with Python dependency chains, Jupyter notebooks, and a learning curve that assumes you already understand what you're trying to learn. That's backwards.&lt;/p&gt;

&lt;p&gt;There's a gap between "read the textbook" and "install Qiskit." That gap is where most people give up.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I actually wanted
&lt;/h2&gt;

&lt;p&gt;A textarea where I type &lt;code&gt;H 0&lt;/code&gt; and immediately see the state vector change. No install. No signup. No notebook server. Just a browser tab. And if I don't remember the syntax, a row of clickable buttons that writes the gate for me.&lt;/p&gt;

&lt;p&gt;That's what the &lt;a href="https://kitmul.com/en/visualizers-logic/quantum-circuit-simulator" rel="noopener noreferrer"&gt;Quantum Circuit Simulator&lt;/a&gt; is. Up to 16 qubits, nine gates (H, X, Y, Z, S, T, CX, SWAP, CCX), a gate toolbar for quick insertion, real-time probability bars, complex amplitudes, and an interactive 3D Bloch sphere that shows each qubit's state on the unit sphere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthj3nayzplwzkjzedsq3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthj3nayzplwzkjzedsq3.webp" alt="The simulator showing a Bell State circuit with clickable gate toolbar, probability bars, amplitudes, and a 3D Bloch sphere where both entangled qubits sit at the origin" width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The syntax is one gate per line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;H 0
CX 0 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's a &lt;a href="https://en.wikipedia.org/wiki/Bell_state" rel="noopener noreferrer"&gt;Bell State&lt;/a&gt;. Two lines. The output shows &lt;code&gt;|00&amp;gt;&lt;/code&gt; at 50.00% and &lt;code&gt;|11&amp;gt;&lt;/code&gt; at 50.00%, with amplitudes of 0.707 + 0.000i each. If you've read Nielsen &amp;amp; Chuang, you recognize this as (1/sqrt(2))(|00&amp;gt; + |11&amp;gt;); the maximally entangled two-qubit state that Einstein called "spooky action at a distance."&lt;/p&gt;

&lt;p&gt;You didn't need to install anything to see that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why browser-based matters for quantum education
&lt;/h2&gt;

&lt;p&gt;There's a pedagogical argument here that goes beyond convenience. When the feedback loop between "write circuit" and "see result" drops to zero seconds, something changes in how you learn.&lt;/p&gt;

&lt;p&gt;You start experimenting. You add a Z gate after the Hadamard and watch the probabilities shift. You swap the control and target qubits on the CNOT and see what breaks. You build a &lt;a href="https://en.wikipedia.org/wiki/Greenberger%E2%80%93Horne%E2%80%93Zeilinger_state" rel="noopener noreferrer"&gt;GHZ state&lt;/a&gt; with three qubits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;H 0
CX 0 1
CX 1 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you see &lt;code&gt;|000&amp;gt;&lt;/code&gt; at 50% and &lt;code&gt;|111&amp;gt;&lt;/code&gt; at 50%: all three qubits entangled, no intermediate states. The 0.707 amplitude on both basis states confirms the math. You didn't need to set up a virtual environment to get there.&lt;/p&gt;

&lt;p&gt;This is how people actually learn. Not by reading about superposition in a PDF; by breaking circuits and watching what happens.&lt;/p&gt;

&lt;h2&gt;
  
  
  The technical implementation
&lt;/h2&gt;

&lt;p&gt;The simulator runs entirely in JavaScript. No backend. No WebAssembly. No WASM-compiled Qiskit. Just matrix multiplication on complex numbers in the browser.&lt;/p&gt;

&lt;p&gt;The state vector starts as &lt;code&gt;|000...0&amp;gt;&lt;/code&gt; (all qubits in the &lt;code&gt;|0&amp;gt;&lt;/code&gt; state). Each gate applies a unitary transformation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hadamard (H)&lt;/strong&gt;: Creates superposition. Transforms &lt;code&gt;|0&amp;gt;&lt;/code&gt; into (|0&amp;gt; + |1&amp;gt;)/sqrt(2).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pauli-X&lt;/strong&gt;: Quantum NOT gate. Flips &lt;code&gt;|0&amp;gt;&lt;/code&gt; to &lt;code&gt;|1&amp;gt;&lt;/code&gt; and vice versa.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pauli-Y&lt;/strong&gt;: Combined bit flip and phase flip. Maps &lt;code&gt;|0&amp;gt;&lt;/code&gt; to &lt;em&gt;i&lt;/em&gt;&lt;code&gt;|1&amp;gt;&lt;/code&gt; and &lt;code&gt;|1&amp;gt;&lt;/code&gt; to -&lt;em&gt;i&lt;/em&gt;&lt;code&gt;|0&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pauli-Z&lt;/strong&gt;: Phase flip. Leaves &lt;code&gt;|0&amp;gt;&lt;/code&gt; unchanged but maps &lt;code&gt;|1&amp;gt;&lt;/code&gt; to -&lt;code&gt;|1&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CX (CNOT)&lt;/strong&gt;: Controlled-NOT. Flips the target qubit if and only if the control qubit is &lt;code&gt;|1&amp;gt;&lt;/code&gt;. This is the gate that creates entanglement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase (S)&lt;/strong&gt;: Rotates by π/2 around the Z-axis. Maps |1⟩ to i|1⟩. The square root of Z.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;T (π/8 gate)&lt;/strong&gt;: Rotates by π/4 around the Z-axis. Key building block for universal quantum computation. The square root of S.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SWAP&lt;/strong&gt;: Exchanges the states of two qubits. Syntax: &lt;code&gt;SWAP q1 q2&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CCX (Toffoli)&lt;/strong&gt;: Controlled-controlled-NOT. Flips the target qubit only when both control qubits are |1⟩. Universal for classical reversible computation. Syntax: &lt;code&gt;CCX c1 c2 target&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a 3-qubit system, the state vector has 2^3 = 8 complex amplitudes. The simulator tracks all of them and displays both the raw amplitudes and the measurement probabilities (|amplitude|^2) in real time.&lt;/p&gt;
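&lt;p&gt;The update loop behind this is small enough to sketch. The following is illustrative code written for this post, not the simulator's actual source. Amplitudes are stored as [re, im] pairs, and qubit 0 is the least significant bit of the basis-state index:&lt;/p&gt;

```javascript
// Minimal state-vector simulator sketch (illustrative, not the tool's real code).

function initState(numQubits) {
  const state = Array.from({ length: 2 ** numQubits }, () => [0, 0]);
  state[0] = [1, 0]; // start in |00...0>
  return state;
}

// Apply a 2x2 gate g = [[a, b], [c, d]] (real entries) to one qubit.
function applyGate(state, qubit, g) {
  const bit = 2 ** qubit;
  for (let i = 0; i !== state.length; i++) {
    if (Math.floor(i / bit) % 2 === 1) continue; // visit each |..0..>/|..1..> pair once
    const j = i + bit;
    const u = state[i], v = state[j];
    state[i] = [g[0][0] * u[0] + g[0][1] * v[0], g[0][0] * u[1] + g[0][1] * v[1]];
    state[j] = [g[1][0] * u[0] + g[1][1] * v[0], g[1][0] * u[1] + g[1][1] * v[1]];
  }
  return state;
}

// CX: where the control bit is 1, swap the two amplitudes that differ in the target bit.
function applyCX(state, control, target) {
  const cBit = 2 ** control;
  const tBit = 2 ** target;
  for (let i = 0; i !== state.length; i++) {
    if (Math.floor(i / cBit) % 2 === 1) {
      if (Math.floor(i / tBit) % 2 === 0) {
        const j = i + tBit;
        const tmp = state[i];
        state[i] = state[j];
        state[j] = tmp;
      }
    }
  }
  return state;
}

// Bell state: H 0 then CX 0 1. Measurement probabilities are |amplitude|^2.
const s = 1 / Math.sqrt(2);
const H = [[s, s], [s, -s]];
const bell = applyCX(applyGate(initState(2), 0, H), 0, 1);
const probs = bell.map(([re, im]) => re * re + im * im);
// probs: 0.5 at index 0 (|00>) and index 3 (|11>), 0 elsewhere
```

&lt;p&gt;That's the whole engine: one pass over the state vector per gate.&lt;/p&gt;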

&lt;p&gt;The circuit state is encoded into the URL. Every change you make (every gate you add, every qubit-count adjustment) updates the URL parameters automatically. Copy the URL, send it to a colleague, and they see your exact circuit. No accounts. No save buttons. No cloud storage. The URL &lt;em&gt;is&lt;/em&gt; the save file.&lt;/p&gt;
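&lt;p&gt;If you want to steal the share-by-URL pattern for your own tools, the core is a few lines. The parameter names below are invented for this sketch (the real tool's scheme may differ), and Node's &lt;code&gt;Buffer&lt;/code&gt; stands in for the browser's &lt;code&gt;btoa&lt;/code&gt;/&lt;code&gt;atob&lt;/code&gt; so the snippet runs anywhere:&lt;/p&gt;

```javascript
// Encode circuit text into a shareable query string and decode it back.
// Parameter names "q" and "c" are illustrative, not the tool's actual scheme.
function encodeCircuit(circuitText, qubits) {
  const params = new URLSearchParams();
  params.set("q", String(qubits));
  params.set("c", Buffer.from(circuitText, "utf8").toString("base64"));
  return "?" + params.toString();
}

function decodeCircuit(search) {
  const params = new URLSearchParams(search);
  return {
    qubits: Number(params.get("q")),
    circuit: Buffer.from(params.get("c"), "base64").toString("utf8"),
  };
}

// Round-trip a GHZ circuit through the query string.
const url = encodeCircuit("H 0\nCX 0 1\nCX 1 2", 3);
const restored = decodeCircuit(url);
// restored.circuit is the original three-line circuit, restored.qubits is 3
```

&lt;p&gt;&lt;code&gt;URLSearchParams&lt;/code&gt; handles the percent-encoding, so base64 padding characters survive the trip.&lt;/p&gt;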

&lt;h2&gt;
  
  
  The gate toolbar and Bloch sphere
&lt;/h2&gt;

&lt;p&gt;Two things kept bugging me about text-only circuit input. First, beginners don't know the syntax. Second, numbers alone don't build geometric intuition about qubit states.&lt;/p&gt;

&lt;p&gt;The gate toolbar sits above the textarea: nine buttons grouped by type (single-qubit: H, X, Y, Z, S, T; two-qubit: CX, SWAP; three-qubit: CCX). Click a button and the gate line gets appended to your circuit. You can still type manually if you prefer, but the buttons remove the "what was the syntax for a Toffoli gate again?" friction.&lt;/p&gt;

&lt;p&gt;The Bloch sphere renders below the results panel. It computes the reduced density matrix for each qubit via partial trace, then extracts the Bloch coordinates (x = 2Re(rho01), y = 2Im(rho10), z = rho00 - rho11). For a single qubit in &lt;code&gt;|0&amp;gt;&lt;/code&gt;, the vector points to the north pole. Apply a Hadamard and it moves to the equator. Create a Bell state and both vectors collapse to the origin, because the reduced state of each qubit in a maximally entangled pair is the maximally mixed state. You can drag the sphere to rotate it, which helps when multiple qubit vectors overlap.&lt;/p&gt;
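&lt;p&gt;That partial-trace step is shorter than it sounds. Here's a sketch for the two-qubit case, again illustrative rather than the tool's source (amplitudes as [re, im] pairs, qubit 0 as the low bit of the state index):&lt;/p&gt;

```javascript
// Bloch vector of qubit 0 from a 2-qubit state vector. Illustrative sketch.
function blochOfQubit0(state) {
  // Partial trace over qubit 1: rho[a][b] = sum_k psi(a + 2k) * conj(psi(b + 2k))
  const rho = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]];
  for (let a = 0; a !== 2; a++) {
    for (let b = 0; b !== 2; b++) {
      for (let k = 0; k !== 2; k++) {
        const [pr, pi] = state[a + 2 * k];
        const [qr, qi] = state[b + 2 * k];
        rho[a][b][0] += pr * qr + pi * qi; // real part of psi * conj(psi')
        rho[a][b][1] += pi * qr - pr * qi; // imaginary part
      }
    }
  }
  return {
    x: 2 * rho[0][1][0],            // 2 Re(rho01)
    y: 2 * rho[1][0][1],            // 2 Im(rho10)
    z: rho[0][0][0] - rho[1][1][0], // rho00 - rho11
  };
}

const h = 1 / Math.sqrt(2);
const bellState = [[h, 0], [0, 0], [0, 0], [h, 0]]; // (|00> + |11>)/sqrt(2)
const v = blochOfQubit0(bellState);
// v is (0, 0, 0): an entangled qubit's reduced state is maximally mixed
```

&lt;p&gt;Feed it &lt;code&gt;[[1, 0], [0, 0], [0, 0], [0, 0]]&lt;/code&gt; instead and the vector comes out at the north pole, z = 1, just as the prose above describes.&lt;/p&gt;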

&lt;p&gt;For circuits with 1 to 4 qubits, all vectors show on the same sphere with different colors. Beyond 4 qubits, a dropdown lets you pick which qubit to visualize.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing to the alternatives
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Kitmul Simulator&lt;/th&gt;
&lt;th&gt;Qiskit&lt;/th&gt;
&lt;th&gt;Cirq&lt;/th&gt;
&lt;th&gt;IBM Quantum Composer&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Setup time&lt;/td&gt;
&lt;td&gt;0 seconds&lt;/td&gt;
&lt;td&gt;10-30 min&lt;/td&gt;
&lt;td&gt;10-30 min&lt;/td&gt;
&lt;td&gt;Account required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Install size&lt;/td&gt;
&lt;td&gt;0 MB&lt;/td&gt;
&lt;td&gt;~500 MB&lt;/td&gt;
&lt;td&gt;~300 MB&lt;/td&gt;
&lt;td&gt;Cloud-based&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max qubits&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;30+&lt;/td&gt;
&lt;td&gt;30+&lt;/td&gt;
&lt;td&gt;127 (hardware)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sharing&lt;/td&gt;
&lt;td&gt;URL copy&lt;/td&gt;
&lt;td&gt;Export notebook&lt;/td&gt;
&lt;td&gt;Export script&lt;/td&gt;
&lt;td&gt;Account link&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Privacy&lt;/td&gt;
&lt;td&gt;100% local&lt;/td&gt;
&lt;td&gt;Local&lt;/td&gt;
&lt;td&gt;Local&lt;/td&gt;
&lt;td&gt;IBM cloud&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;Free tier limited&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The trade-off is obvious. The Kitmul simulator handles up to 16 qubits. Qiskit handles 30 or more. If you're implementing &lt;a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm" rel="noopener noreferrer"&gt;Shor's algorithm&lt;/a&gt; or running variational quantum eigensolvers, you need the full SDK. If you're trying to understand what a Hadamard gate does before committing to a 500 MB install, you don't.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasir7tsp6euw61omv4hd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasir7tsp6euw61omv4hd.webp" alt="A quantum computer cryostat chandelier; gold-plated copper wiring and superconducting coils cooled to millikelvins above absolute zero" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Eight circuits worth trying
&lt;/h2&gt;

&lt;p&gt;Here are eight circuits you can paste directly into the &lt;a href="https://kitmul.com/en/visualizers-logic/quantum-circuit-simulator" rel="noopener noreferrer"&gt;simulator&lt;/a&gt; (or build them by clicking the gate buttons):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Superposition on a single qubit&lt;/strong&gt; (set qubits to 1):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;H 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: &lt;code&gt;|0&amp;gt;&lt;/code&gt; and &lt;code&gt;|1&amp;gt;&lt;/code&gt; each at 50%. This is the quantum coin flip.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Bell State&lt;/strong&gt; (set qubits to 2):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;H 0
CX 0 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: &lt;code&gt;|00&amp;gt;&lt;/code&gt; and &lt;code&gt;|11&amp;gt;&lt;/code&gt; each at 50%. Maximal entanglement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. GHZ State&lt;/strong&gt; (set qubits to 3):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;H 0
CX 0 1
CX 1 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: &lt;code&gt;|000&amp;gt;&lt;/code&gt; and &lt;code&gt;|111&amp;gt;&lt;/code&gt; each at 50%. Three-qubit entanglement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Phase kickback&lt;/strong&gt; (set qubits to 2):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X 1
H 0
CX 0 1
H 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: The Hadamard-CNOT-Hadamard sandwich. Watch how the phase of the target qubit kicks back to the control. This is the core mechanism behind &lt;a href="https://en.wikipedia.org/wiki/Deutsch%E2%80%93Jozsa_algorithm" rel="noopener noreferrer"&gt;Deutsch's algorithm&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Quantum NOT with Hadamard&lt;/strong&gt; (set qubits to 1):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;H 0
Z 0
H 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: &lt;code&gt;|1&amp;gt;&lt;/code&gt; at 100%. The H-Z-H sequence is equivalent to a Pauli-X gate. This identity (HZH = X) shows up everywhere in quantum error correction.&lt;/p&gt;
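&lt;p&gt;You can check the identity with ordinary arithmetic, since H and Z have real entries:&lt;/p&gt;

```javascript
// Verify HZH = X by plain 2x2 matrix multiplication (both matrices are real).
const s = 1 / Math.sqrt(2);
const Hm = [[s, s], [s, -s]]; // Hadamard
const Zm = [[1, 0], [0, -1]]; // Pauli-Z

function mul(A, B) {
  return [
    [A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
    [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]],
  ];
}

const HZH = mul(mul(Hm, Zm), Hm);
// HZH is [[0, 1], [1, 0]] up to floating-point noise: the Pauli-X matrix
```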

&lt;p&gt;&lt;strong&gt;6. Phase gate demo&lt;/strong&gt; (set qubits to 1):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X 0
S 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: &lt;code&gt;|1&amp;gt;&lt;/code&gt; at 100%, but the amplitude is now &lt;code&gt;0.000 + 1.000i&lt;/code&gt;. The S gate adds a π/2 phase rotation. Compare with Z (which maps |1⟩ to -|1⟩).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Toffoli gate — quantum AND&lt;/strong&gt; (set qubits to 3):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X 0
X 1
CCX 0 1 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: &lt;code&gt;|111&amp;gt;&lt;/code&gt; at 100%. The Toffoli gate flips qubit 2 only when both qubits 0 and 1 are &lt;code&gt;|1&amp;gt;&lt;/code&gt;. This is the reversible equivalent of a classical AND gate.&lt;/p&gt;
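&lt;p&gt;The reversible-AND claim is easy to check on basis states. A small sketch written for this post (not taken from the tool) that walks all four classical inputs:&lt;/p&gt;

```javascript
// CCX as a permutation of basis states: flip the target bit when both controls are 1.
function ccx(index, c1, c2, target) {
  const bitOf = (i, q) => Math.floor(i / 2 ** q) % 2;
  if (bitOf(index, c1) === 1) {
    if (bitOf(index, c2) === 1) {
      return bitOf(index, target) === 0 ? index + 2 ** target : index - 2 ** target;
    }
  }
  return index; // controls not both 1: nothing changes
}

// Truth table: with the target (qubit 2) starting at 0, its output bit is a AND b.
const results = [];
for (let a = 0; a !== 2; a++) {
  for (let b = 0; b !== 2; b++) {
    const input = a + 2 * b; // a on qubit 0, b on qubit 1
    const output = ccx(input, 0, 1, 2);
    const targetOut = Math.floor(output / 4) % 2;
    results.push([a, b, targetOut]);
  }
}
// targetOut is 1 only for a = b = 1: the truth table of AND, computed reversibly
```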

&lt;p&gt;&lt;strong&gt;8. SWAP circuit&lt;/strong&gt; (set qubits to 2):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X 0
SWAP 0 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: &lt;code&gt;|01&amp;gt;&lt;/code&gt; at 100%. The state that was on qubit 0 has been transferred to qubit 1.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who this is for
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;CS students&lt;/strong&gt; taking their first quantum computing course. The simulator covers exactly what shows up in chapters 1-4 of &lt;a href="https://en.wikipedia.org/wiki/Quantum_Computation_and_Quantum_Information" rel="noopener noreferrer"&gt;Nielsen &amp;amp; Chuang&lt;/a&gt;: single-qubit gates, multi-qubit gates, entanglement, and measurement probabilities. The gate toolbar means you don't need to memorize syntax on day one, and the Bloch sphere gives you the geometric picture that textbooks struggle to convey in static figures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Software engineers&lt;/strong&gt; curious about quantum computing but not ready to commit to a full SDK installation. You can verify your intuition in 30 seconds and then decide if it's worth going deeper with Qiskit or Cirq.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Physics students&lt;/strong&gt; who understand the math but want to quickly prototype small circuits without booting up Jupyter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Teachers&lt;/strong&gt; who need a zero-friction demo tool for lectures. Share a URL; students see the circuit on their own devices. No lab setup. No installation instructions. No "my Python version is different" support tickets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Privacy
&lt;/h2&gt;

&lt;p&gt;The entire simulation runs in your browser's JavaScript engine. No circuit data is transmitted to any server. No analytics track which gates you use. The URL encoding uses base64, which is decoded client-side. If you're working with proprietary circuit designs (up to 16 qubits), nothing leaves your device.&lt;/p&gt;

&lt;p&gt;If you want to explore other tools in the same category, the &lt;a href="https://kitmul.com/en/visualizers-logic"&gt;Visualizers &amp;amp; Logic Tools collection&lt;/a&gt; includes graph visualizers, truth tables, and logic gate simulators that pair well with quantum circuit work. For tracking your study sessions, the &lt;a href="https://kitmul.com/en/agile-project-management/pomodoro-agile"&gt;Pomodoro Timer&lt;/a&gt; with built-in focus music works surprisingly well for problem sets.&lt;/p&gt;

&lt;h2&gt;
  
  
  The point
&lt;/h2&gt;

&lt;p&gt;Quantum computing doesn't need to be gatekept by tooling complexity. The fundamental operations (Hadamard, Pauli gates, CNOT) are matrix multiplications. A browser can do matrix multiplication. So a browser can simulate small quantum circuits. And with a clickable gate toolbar and an interactive Bloch sphere, the barrier to entry drops even further.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/visualizers-logic/quantum-circuit-simulator" rel="noopener noreferrer"&gt;Quantum Circuit Simulator&lt;/a&gt; won't replace Qiskit for research. It replaces the 30 minutes between "I wonder what happens if I apply H then CX" and actually seeing the answer. For learning, that's everything.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Quantum Circuit Simulator is free, private, and runs entirely in your browser. No signup, no install, no data leaves your device. Part of the &lt;a href="https://kitmul.com/en/visualizers-logic"&gt;Visualizers &amp;amp; Logic Tools&lt;/a&gt; collection on Kitmul. Photos by &lt;a href="https://unsplash.com/@alexshuper" rel="noopener noreferrer"&gt;Alex Shuper&lt;/a&gt; and &lt;a href="https://unsplash.com/@dynamicwang" rel="noopener noreferrer"&gt;Dynamic Wang&lt;/a&gt; on Unsplash.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>quantum</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>I Tracked My To-Do List for 30 Days. 73% of My 'Urgent' Work Was Someone Else's Emergency.</title>
      <dc:creator>Aral Roca</dc:creator>
      <pubDate>Tue, 21 Apr 2026 19:22:47 +0000</pubDate>
      <link>https://forem.com/aralroca/i-tracked-my-to-do-list-for-30-days-73-of-my-urgent-work-was-someone-elses-emergency-63</link>
      <guid>https://forem.com/aralroca/i-tracked-my-to-do-list-for-30-days-73-of-my-urgent-work-was-someone-elses-emergency-63</guid>
      <description>&lt;p&gt;I tracked every task I touched for 30 days. Not in a project management tool; in a spreadsheet with three columns: task, time spent, and whether it actually mattered a week later.&lt;/p&gt;

&lt;p&gt;The results were embarrassing. 73% of the tasks I labeled "urgent" on Monday were irrelevant by Friday. Most of them were other people's priorities that I adopted because they arrived with exclamation marks in the subject line. The remaining 27%, the things that actually moved my projects forward, had been sitting in my backlog for weeks, quietly aging while I answered Teams threads.&lt;/p&gt;

&lt;p&gt;That spreadsheet is why I stopped using to-do lists and started using the &lt;a href="https://kitmul.com/en/agile-project-management/eisenhower-matrix" rel="noopener noreferrer"&gt;Eisenhower Matrix&lt;/a&gt;. Not because I read a productivity book. Because the data made it impossible to keep pretending that busy equals productive.&lt;/p&gt;

&lt;h2&gt;
  
  
  The urgent-important confusion
&lt;/h2&gt;

&lt;p&gt;Most people use the words "urgent" and "important" interchangeably. They're not. &lt;a href="https://en.wikipedia.org/wiki/Dwight_D._Eisenhower" rel="noopener noreferrer"&gt;Dwight Eisenhower&lt;/a&gt; understood this distinction better than anyone in the 20th century. The man commanded D-Day, served as NATO's first Supreme Commander, and ran the United States for eight years. His productivity wasn't a lifestyle hack; it was a survival requirement.&lt;/p&gt;

&lt;p&gt;His insight was simple: &lt;strong&gt;urgent means it demands attention now. Important means it contributes to your long-term goals.&lt;/strong&gt; These two axes are independent. A task can be both, either, or neither.&lt;/p&gt;

&lt;p&gt;The problem is that urgency hijacks your attention. A ringing phone feels more important than a strategic plan because your brain evolved to respond to immediate stimuli. &lt;a href="https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow" rel="noopener noreferrer"&gt;Daniel Kahneman&lt;/a&gt; calls this System 1 thinking: fast, automatic, and terrible at prioritization. Every time you react to an urgent-but-unimportant task, you're letting your amygdala run your calendar.&lt;/p&gt;

&lt;h2&gt;
  
  
  The four quadrants
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/agile-project-management/eisenhower-matrix" rel="noopener noreferrer"&gt;Eisenhower Matrix&lt;/a&gt; sorts every task into one of four quadrants:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q1: Urgent + Important (Do First)&lt;/strong&gt;&lt;br&gt;
Deadlines, crises, client emergencies. A production server is down. A contract expires tomorrow. These need immediate action and actually matter. The trap: if Q1 is always full, you're not prioritizing; you're firefighting. That means your Q2 is neglected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2: Not Urgent + Important (Schedule)&lt;/strong&gt;&lt;br&gt;
Strategic planning. Skill development. Relationship building. Exercise. Writing documentation. These tasks never scream at you, which is exactly why most people ignore them until they become Q1 crises. &lt;a href="https://en.wikipedia.org/wiki/The_7_Habits_of_Highly_Effective_People" rel="noopener noreferrer"&gt;Stephen Covey&lt;/a&gt; argued that Q2 is where your career lives. He was right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3: Urgent + Not Important (Delegate)&lt;/strong&gt;&lt;br&gt;
Most meetings. Most emails. Someone else's deadline that landed in your inbox. These feel urgent because another human being is waiting, but they don't move &lt;em&gt;your&lt;/em&gt; goals forward. Delegate them, batch them, or set boundaries. If you can't delegate, timebox them aggressively: 15 minutes for email twice a day, not 15 checks per hour.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q4: Not Urgent + Not Important (Eliminate)&lt;/strong&gt;&lt;br&gt;
Excessive social media scrolling. Unnecessary meetings where you're CC'd "just in case." Reorganizing your desktop icons for the third time this week. Delete these from your day. They produce nothing and they consume the time you need for Q2 work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumkuxswg1dxlxo55kacd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumkuxswg1dxlxo55kacd.webp" alt="Sticky notes on a planning board; tasks waiting to be sorted by what actually matters" width="800" height="629"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Q2 is where your career actually lives
&lt;/h2&gt;

&lt;p&gt;Here's the pattern I noticed in my 30-day audit. Every task that moved my career forward (shipping a feature, learning a new technology, writing a blog post, having a meaningful 1-on-1) was Q2. None of them were urgent when I started them. All of them were important.&lt;/p&gt;

&lt;p&gt;The tasks that consumed most of my time (answering Teams messages, attending status meetings, reviewing documents I wasn't the decision-maker on) were Q3. Urgent because someone was waiting. Unimportant because my absence wouldn't have changed the outcome.&lt;/p&gt;

&lt;p&gt;Research backs this up. A &lt;a href="https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-organization-blog/time-management-is-really-life-management" rel="noopener noreferrer"&gt;McKinsey study&lt;/a&gt; found that executives spend 28% of their day on email and another 19% on "gathering information." That's nearly half the workday on Q3 activities. The executives who outperform spend disproportionately more time on Q2: coaching, strategy, and capability building.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://calnewport.com/deep-work-rules-for-focused-success-in-a-distracted-world/" rel="noopener noreferrer"&gt;Cal Newport&lt;/a&gt; calls Q2 work "deep work" and argues it's the skill that separates people who build things from people who just stay busy. His framing is right: Q2 tasks require focused, uninterrupted blocks. You can't do strategic planning in the 4 minutes between Teams notifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  The daily ritual I actually use
&lt;/h2&gt;

&lt;p&gt;After the 30-day audit, I settled on a system that takes about 5 minutes each morning:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Brain dump.&lt;/strong&gt; Write every task floating in your head into the &lt;a href="https://kitmul.com/en/agile-project-management/eisenhower-matrix" rel="noopener noreferrer"&gt;Eisenhower Matrix tool&lt;/a&gt;. Don't categorize yet; just get them out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sort by asking two questions.&lt;/strong&gt; For each task: "Does this have a real deadline?" (urgency) and "Will this matter in 30 days?" (importance). Drag it into the right quadrant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do Q1 first.&lt;/strong&gt; These are non-negotiable. But if Q1 has more than 3 items, something is wrong with your planning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Block time for Q2.&lt;/strong&gt; I use the &lt;a href="https://kitmul.com/en/agile-project-management/pomodoro-agile" rel="noopener noreferrer"&gt;Pomodoro Timer&lt;/a&gt; for this: 25-minute focus blocks with no Teams, no email, no notifications. Two Pomodoro blocks per morning on Q2 work changed my output more than any other habit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Q3.&lt;/strong&gt; Answer emails twice a day. Attend meetings only if you're the decision-maker. For recurring meetings, use the &lt;a href="https://kitmul.com/en/agile-project-management/meeting-cost-calculator" rel="noopener noreferrer"&gt;Meeting Cost Calculator&lt;/a&gt; once; when your team sees that the weekly sync costs $1,400 per session, they suddenly find ways to make it shorter.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kill Q4.&lt;/strong&gt; If it's not urgent and not important, why is it on your list?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The whole process takes 5 minutes. The return is measured in hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  Eisenhower vs other prioritization frameworks
&lt;/h2&gt;

&lt;p&gt;The matrix isn't the only prioritization tool. Here's when to use what:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;Best for&lt;/th&gt;
&lt;th&gt;Weakness&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Eisenhower Matrix&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Daily/weekly personal task sorting&lt;/td&gt;
&lt;td&gt;Doesn't handle dependencies or team capacity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://kitmul.com/en/agile-project-management/wsjf-calculator"&gt;WSJF&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Backlog prioritization with 15+ items&lt;/td&gt;
&lt;td&gt;Overkill for personal task lists&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://kitmul.com/en/agile-project-management/moscow-prioritization"&gt;MoSCoW&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Scope negotiations with stakeholders&lt;/td&gt;
&lt;td&gt;Binary in/out; no nuance on ordering&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://kitmul.com/en/agile-project-management/rice-scoring"&gt;RICE&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Product feature ranking&lt;/td&gt;
&lt;td&gt;Requires reach/impact data you might not have&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://kitmul.com/en/agile-project-management/kano-model"&gt;Kano Model&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Understanding customer delight vs basics&lt;/td&gt;
&lt;td&gt;Research-heavy; not for daily planning&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The Eisenhower Matrix works best for individual contributors and managers sorting their own week. If you need to prioritize a product backlog across a team, reach for &lt;a href="https://kitmul.com/en/agile-project-management/wsjf-calculator"&gt;WSJF&lt;/a&gt;. If you need to negotiate scope with a stakeholder, &lt;a href="https://kitmul.com/en/agile-project-management/moscow-prioritization"&gt;MoSCoW&lt;/a&gt; gives you a shared language. The tools solve different shapes of the same problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common mistakes I see
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Putting everything in Q1.&lt;/strong&gt; If everything is urgent and important, nothing is. You have a boundary problem, not a prioritization problem. Learn to say "I can do this Thursday" to things that arrive as "ASAP."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ignoring Q2 because nothing is on fire.&lt;/strong&gt; Q2 tasks become Q1 tasks when you neglect them long enough. That "eventually upgrade the database" task? It's Q2 until the database crashes at 3 AM. Then it's Q1, it's expensive, and you're doing it under pressure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Treating Q3 as Q1.&lt;/strong&gt; Just because someone sent you an urgent email doesn't make the task important to &lt;em&gt;your&lt;/em&gt; goals. Other people's urgency is not your emergency. The fact that your boss asked for something today doesn't automatically make it important; it makes it urgent. These are different concepts, and confusing them is the single most common time management failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Never reviewing the matrix.&lt;/strong&gt; Priorities change. A task that was Q2 on Monday might be Q1 by Wednesday because a deadline moved. Check your matrix every morning. It takes 2 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The part nobody tells you about Eisenhower
&lt;/h2&gt;

&lt;p&gt;The popular version of the Eisenhower Matrix comes from &lt;a href="https://en.wikipedia.org/wiki/The_7_Habits_of_Highly_Effective_People" rel="noopener noreferrer"&gt;Stephen Covey's &lt;em&gt;The 7 Habits of Highly Effective People&lt;/em&gt;&lt;/a&gt;, published in 1989. But the idea predates the book. Eisenhower himself reportedly said: &lt;em&gt;"What is important is seldom urgent and what is urgent is seldom important."&lt;/em&gt; The &lt;a href="https://www.eisenhowerlibrary.gov/" rel="noopener noreferrer"&gt;Eisenhower Presidential Library&lt;/a&gt; attributes this to a 1954 speech, though the exact wording is debated.&lt;/p&gt;

&lt;p&gt;What's not debated is the principle. Eisenhower managed the complexity of the Cold War, the Interstate Highway System, and the creation of NASA while reportedly maintaining a calm, structured approach to his days. He wasn't productive because he worked more hours. He was productive because he worked on the right things.&lt;/p&gt;

&lt;p&gt;The matrix is his most lasting contribution to productivity thinking, and unlike most productivity frameworks from that era, it scales perfectly to modern work. Swap "phone calls" for "Teams messages" and "memos" for "emails"; the quadrants haven't changed in 70 years because the human tendency to confuse urgency with importance hasn't changed either.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pair it with the right tools
&lt;/h2&gt;

&lt;p&gt;The Eisenhower Matrix handles the "what should I work on?" question. For the rest, pair it with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/agile-project-management/pomodoro-agile"&gt;Pomodoro Timer&lt;/a&gt;&lt;/strong&gt; for executing Q1 and Q2 work in focused blocks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/agile-project-management/standup-timer"&gt;Standup Timer&lt;/a&gt;&lt;/strong&gt; to keep daily standups short and Q3-efficient&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/agile-project-management/okr-tracker"&gt;OKR Tracker&lt;/a&gt;&lt;/strong&gt; to define what "important" means at the quarterly level&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/agile-project-management/sprint-capacity-calculator"&gt;Sprint Capacity Calculator&lt;/a&gt;&lt;/strong&gt; to prevent overloading Q1 with more than your team can handle&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://kitmul.com/en/agile-project-management/scrum-poker"&gt;Scrum Poker&lt;/a&gt;&lt;/strong&gt; to size tasks before categorizing them&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kitmul.com/en/agile-project-management/eisenhower-matrix" rel="noopener noreferrer"&gt;Eisenhower Matrix&lt;/a&gt; runs in your browser. Drag tasks between quadrants. No account, no sync service, no data uploaded. Your task list stays on your device.&lt;/p&gt;

&lt;p&gt;Start with tomorrow morning. Write down every task. Sort them into four boxes. Then watch how much of your "urgent" work turns out to be noise.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Eisenhower Matrix tool is free, private, and runs entirely in your browser. Part of the &lt;a href="https://kitmul.com/en/agile-project-management"&gt;Agile &amp;amp; Project Management toolkit&lt;/a&gt; on Kitmul.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>career</category>
      <category>beginners</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
