<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Drlove Phukon</title>
    <description>The latest articles on Forem by Drlove Phukon (@drlove).</description>
    <link>https://forem.com/drlove</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3871083%2Fba49c866-a26f-46cc-ae36-310bb2c33b55.png</url>
      <title>Forem: Drlove Phukon</title>
      <link>https://forem.com/drlove</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/drlove"/>
    <language>en</language>
    <item>
      <title>How I built a file converter platform with a $0 backend (100% Client-Side Architecture)</title>
      <dc:creator>Drlove Phukon</dc:creator>
      <pubDate>Fri, 10 Apr 2026 06:39:47 +0000</pubDate>
      <link>https://forem.com/drlove/how-i-built-a-file-converter-platform-with-a-0-backend-100-client-side-architecture-i93</link>
      <guid>https://forem.com/drlove/how-i-built-a-file-converter-platform-with-a-0-backend-100-client-side-architecture-i93</guid>
      <description>&lt;p&gt;Every time I needed to quickly format a CSV dataset into JSON Lines or convert a Markdown file, I ran into the same annoying problem. The top Google results are always massive SaaS tools that demand an email signup, restrict file sizes, and worst of all—force you to upload your sensitive data to their backend servers.&lt;/p&gt;

&lt;p&gt;I didn't want to upload my local database dumps to a random server just to change the format.&lt;/p&gt;

&lt;p&gt;So, I built &lt;a href="https://myconverterpro.site/" rel="noopener noreferrer"&gt;MyConverterPro&lt;/a&gt;: a suite of developer and document tools that forces the browser to do 100% of the heavy lifting. Zero backend infrastructure. Zero server costs. Total data privacy.&lt;/p&gt;

&lt;p&gt;Here is the exact architecture, the packages I used, and the hard lessons I learned making the browser do the backend's job.&lt;/p&gt;

&lt;h2&gt;
  &lt;strong&gt;The Stack &amp;amp; The State&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To make this fast, the core is built on React 19 and Vite. Styling is handled by Tailwind 4 and Shadcn UI to keep the interface clean and out of the user's way.&lt;/p&gt;

&lt;p&gt;Because there is no database, I had to figure out how to track usage limits (a soft paywall after a certain number of conversions) without requiring a login.&lt;br&gt;
I used Zustand with local storage persistence. The store (useAppStore) splits state into two chunks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Persisted:&lt;/strong&gt; conversionCount and isPremium are saved directly to localStorage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Session-only:&lt;/strong&gt; adsWatchedThisSession resets on refresh; it is explicitly excluded via Zustand's partialize option.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
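&lt;p&gt;A minimal sketch of how that split can look. The field names come from this post, but the store shape and the persist wiring around it are assumptions, not the app's exact code:&lt;/p&gt;

```javascript
// Hypothetical sketch of useAppStore's pieces. In the real app these would
// be passed to zustand: create(persist(storeCreator, persistOptions)).
const storeCreator = (set) => ({
  conversionCount: 0,
  isPremium: false,
  adsWatchedThisSession: 0, // session-only, never written to disk
  incrementConversions: () =>
    set((s) => ({ conversionCount: s.conversionCount + 1 })),
});

// The persist options decide what actually reaches localStorage.
const persistOptions = {
  name: 'app-storage', // assumed storage key
  // partialize picks the keys that survive a page reload;
  // anything omitted here (like adsWatchedThisSession) resets.
  partialize: (state) => ({
    conversionCount: state.conversionCount,
    isPremium: state.isPremium,
  }),
};
```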
&lt;h2&gt;
  &lt;strong&gt;The Engine: Forcing the Browser to Work&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The golden rule of the app: fetch() is never used to send a file payload. Everything happens in memory using the FileReader and Blob APIs.&lt;/p&gt;

&lt;p&gt;Here is what is handling the heavy lifting under the hood:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PapaParse (CSV processing):&lt;/strong&gt; This is the workhorse for tools like CSV to JSONL and CSV to PDF. By parsing the file directly in the browser, I can map the data arrays instantly. For JSONL, the app iterates through the parsed rows, runs each through JSON.stringify, joins them with \n, and creates an application/x-jsonlines Blob to trigger an immediate download straight from memory, with no network round trip.&lt;/p&gt;
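&lt;p&gt;A condensed sketch of that JSONL step. The PapaParse call itself is elided; the sample rows below stand in for its parsed output:&lt;/p&gt;

```javascript
// rows stands in for PapaParse output, e.g. Papa.parse(file, { header: true }).data
const rows = [
  { id: '1', name: 'Ada' },
  { id: '2', name: 'Grace' },
];

// One JSON document per line, joined with newlines: the JSONL format.
const jsonl = rows.map((row) => JSON.stringify(row)).join('\n');

// Wrap the string in a Blob so the browser can hand it straight to a
// download link (URL.createObjectURL(blob) on a temporary anchor element).
const blob = new Blob([jsonl], { type: 'application/x-jsonlines' });
```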

&lt;p&gt;&lt;strong&gt;jsPDF &amp;amp; jsPDF-AutoTable (document generation):&lt;/strong&gt; For the CSV to PDF tool, PapaParse extracts the headers and rows and hands them off to jsPDF, which calculates landscape orientation and uses autoTable to draw clean, formatted grids entirely in the user's memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Native image APIs (Image to PDF):&lt;/strong&gt; Instead of a heavy library, the app reads image files via FileReader, loads them into a native HTML5 Image object to calculate aspect ratios dynamically, and draws them onto the PDF page with a calculated padding scale.&lt;/p&gt;
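&lt;p&gt;The aspect-ratio math is simple enough to sketch as a pure function. The page size and padding values in the comments are illustrative, not the app's actual constants:&lt;/p&gt;

```javascript
// Scale an image to fit inside a PDF page while preserving aspect ratio.
// pageW/pageH would come from jsPDF's page size (e.g. A4 portrait is
// roughly 595 x 842 pt); padding is an assumed uniform margin.
function fitImageToPage(imgW, imgH, pageW, pageH, padding) {
  const maxW = pageW - 2 * padding;
  const maxH = pageH - 2 * padding;
  // Use the smaller scale factor so neither dimension overflows the page.
  const scale = Math.min(maxW / imgW, maxH / imgH);
  return { width: imgW * scale, height: imgH * scale };
}
```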
&lt;h2&gt;
  
  
  &lt;strong&gt;The Routing Architecture (Keeping the Bundle Small)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When you have 15+ completely different tools (SVG optimizers, Markdown parsers using marked, Base64 encoders), bundling them all together would create a massive, slow-loading JavaScript file.&lt;/p&gt;

&lt;p&gt;Instead of hardcoding every route, the app uses a dynamic routing engine built on React Router v7.&lt;br&gt;
I created a central tools.ts configuration array. Every tool component is wrapped in React.lazy and a Suspense fallback:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// The routing map automatically generates URLs and lazy-loads the heavy libraries only when clicked&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;TOOLS_CONFIG&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Route&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;element&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Suspense&lt;/span&gt; &lt;span class="na"&gt;fallback&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ToolLoadingFallback&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SeoHead&lt;/span&gt; &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;slug&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nt"&gt;component&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Suspense&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;))}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a user only wants to encode Base64, their browser never downloads the heavy jsPDF or mammoth (Word doc parsing) libraries.&lt;/p&gt;
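&lt;p&gt;For contrast, the lightest tools need no library at all. A Unicode-safe Base64 encoder is a few lines of built-in browser APIs (a sketch of the general technique, not the app's exact code):&lt;/p&gt;

```javascript
// btoa only handles Latin-1, so run the text through TextEncoder first to
// get UTF-8 bytes, then map those bytes into the string btoa expects.
function encodeBase64(text) {
  const bytes = new TextEncoder().encode(text);
  let binary = '';
  bytes.forEach((b) => { binary += String.fromCharCode(b); });
  return btoa(binary);
}

// Reverse the trip: atob yields a byte string, TextDecoder restores UTF-8.
function decodeBase64(encoded) {
  const binary = atob(encoded);
  const bytes = Uint8Array.from(binary, (ch) => ch.charCodeAt(0));
  return new TextDecoder().decode(bytes);
}
```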

&lt;h2&gt;
  &lt;strong&gt;The Lessons Learned&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Main Thread Trap:&lt;/strong&gt;&lt;br&gt;
When a user drops a massive 50MB CSV file into the Shadcn dropzone, PapaParse has to chew through millions of lines. Because JavaScript is single-threaded, doing this directly on the main thread freezes the entire React UI; the browser looks like it crashed. To fix this, heavy conversions must be pushed to Web Workers, letting the UI thread keep the loading spinners running smoothly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Client-Side Word Documents Are a Nightmare:&lt;/strong&gt;&lt;br&gt;
Extracting raw text from a .docx file with mammoth.js works decently, but preserving complex tables, proprietary fonts, and exact margins while rendering directly to PDF purely on the client side is far harder than doing it in a Python backend.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Ask&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I am currently trying to optimize the Word to PDF conversion pipeline. mammoth is great for raw HTML extraction, but getting jsPDF to map that HTML back into a perfectly formatted, paginated PDF without a backend rendering engine is proving difficult.&lt;/p&gt;

&lt;p&gt;If anyone has successfully built a robust DOCX to PDF pipeline that runs 100% in the browser (without using an API like CloudConvert), I would love to hear how you handled the layout rendering.&lt;/p&gt;

&lt;p&gt;You can test the current speed of the client-side conversions here: &lt;a href="https://myconverterpro.site/" rel="noopener noreferrer"&gt;myconverterpro&lt;/a&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>javascript</category>
      <category>react</category>
      <category>frontend</category>
    </item>
  </channel>
</rss>
