<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Azeez Roheem</title>
    <description>The latest articles on Forem by Azeez Roheem (@azeezroheem).</description>
    <link>https://forem.com/azeezroheem</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3690399%2F6dcf8bfe-753e-4caf-8113-3d54d9ca0cbb.jpg</url>
      <title>Forem: Azeez Roheem</title>
      <link>https://forem.com/azeezroheem</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/azeezroheem"/>
    <language>en</language>
    <item>
      <title>Building an AI Invoice Generator in a Week (Week 8 of My NanoCrafts Build Curriculum)</title>
      <dc:creator>Azeez Roheem</dc:creator>
      <pubDate>Wed, 08 Apr 2026 23:06:39 +0000</pubDate>
      <link>https://forem.com/azeezroheem/building-an-ai-invoice-generator-in-a-week-week-8-of-my-nanocrafts-build-curriculum-43i4</link>
      <guid>https://forem.com/azeezroheem/building-an-ai-invoice-generator-in-a-week-week-8-of-my-nanocrafts-build-curriculum-43i4</guid>
      <description>&lt;p&gt;Week 8 of my NanoCrafts build curriculum had one goal: ship a second SaaS product fast. After spending seven weeks building Resume AI Tailor — a full-stack AI resume rewriter with Stripe, usage limits, and saved resumes — I wanted to validate that the patterns I'd built up weren't project-specific. Could I take the same stack and ship something completely different in a single week?&lt;/p&gt;

&lt;p&gt;The invoice generator was the perfect test. Small scope, clear value, real utility. Type a sentence, get a professional PDF. No spreadsheets, no templates, no friction. Here's how I built it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The NLP Parser — The Most Interesting Part&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every invoice generator has a form. Client name, hours, rate, currency — you fill it in, you get a PDF. Boring. The interesting question was: what if you didn't have to?&lt;/p&gt;

&lt;p&gt;The core idea was simple. You type &lt;em&gt;Invoice Acme for 10 hrs at £100/hr&lt;/em&gt; and the app figures out the rest. To make that work reliably, I built a two-tier parser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 1 — Regex (fast path)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first tier is pure regex. No API call, no latency, no cost. It handles the patterns that cover the vast majority of real inputs:&lt;/p&gt;

&lt;p&gt;Invoice Acme for 10 hrs at £100/hr&lt;br&gt;
Bill TechCorp 5 hours at $75 per hour&lt;br&gt;
Charge Wonka Co 3hrs £120/hr&lt;br&gt;
Send invoice to Globex for 8 hours, rate £50&lt;/p&gt;

&lt;p&gt;The regex extracts three things: client name, hours, and rate. Currency is inferred from the symbol — £ maps to GBP, $ to USD, € to EUR, with GBP as the default fallback. If all three fields are extracted successfully, the result comes back with confidence: "high" and the UI shows an "Auto-filled" badge in green.&lt;/p&gt;
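&lt;p&gt;A tier-1 parser along these lines handles all four examples above (a sketch, not the app's source; the function and field names are illustrative):&lt;/p&gt;

```typescript
// Illustrative fast path: pull client, hours, and rate out with regex,
// infer the currency from the symbol, and tag the result high-confidence.
type ParsedInvoice = {
  client: string;
  hours: number;
  rate: number;
  currency: 'GBP' | 'USD' | 'EUR';
  confidence: 'high';
};

function parseWithRegex(input: string): ParsedInvoice | null {
  const hours = input.match(/(\d+(?:\.\d+)?)\s*(?:hrs?|hours?)/i);
  const rate = input.match(/([£$€])\s*(\d+(?:\.\d+)?)/);
  const client = input.match(
    /(?:send invoice to|invoice|bill|charge)\s+([A-Za-z][\w .]*?)(?=\s+(?:for\s+)?\d|\s*[£$€])/i
  );
  if (!hours || !rate || !client) return null; // fall through to the AI tier
  const currency = rate[1] === '$' ? 'USD' : rate[1] === '€' ? 'EUR' : 'GBP';
  return {
    client: client[1].trim(),
    hours: Number(hours[1]),
    rate: Number(rate[2]),
    currency,
    confidence: 'high',
  };
}
```

&lt;p&gt;Returning &lt;em&gt;null&lt;/em&gt; instead of a partial result is what keeps the two-tier design clean: anything the regex can't fully extract falls through to the AI tier.&lt;/p&gt;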

&lt;p&gt;&lt;strong&gt;Tier 2 — OpenAI fallback&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the regex fails — ambiguous phrasing, unusual structure, missing fields — the input gets passed to gpt-4o-mini with a tightly scoped system prompt: extract the client, hours, rate, and currency from the input, and return them as JSON with no extra commentary.&lt;/p&gt;



&lt;p&gt;The model returns structured JSON, which gets parsed and returned with confidence: "low", triggering a yellow "AI-assisted" badge in the UI. The user sees the same form either way — they can always correct the fields before saving.&lt;/p&gt;
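&lt;p&gt;Even with a tight prompt, the model's reply has to be validated before it reaches the form. A minimal sketch of that guard, assuming the reply arrives as a JSON string (names are mine, not the app's):&lt;/p&gt;

```typescript
// Illustrative validation of the model's JSON reply before it reaches the form.
type AiParsed = {
  client: string;
  hours: number;
  rate: number;
  currency: string;
  confidence: 'low';
};

function parseModelReply(raw: string): AiParsed | null {
  try {
    const data = JSON.parse(raw);
    if (typeof data.client !== 'string') return null;
    if (typeof data.hours !== 'number' || typeof data.rate !== 'number') return null;
    const currency = ['GBP', 'USD', 'EUR'].includes(data.currency) ? data.currency : 'GBP';
    // Everything that comes back from the model is tagged low-confidence.
    return { client: data.client, hours: data.hours, rate: data.rate, currency, confidence: 'low' };
  } catch {
    return null; // the model returned something that isn't JSON at all
  }
}
```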

&lt;p&gt;The confidence field is a small detail that pays off in UX. It tells the user exactly how much to trust the auto-fill without any extra explanation needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PDF Generation with @react-pdf/renderer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the advantages of building on a curriculum is that hard problems stay solved. I'd already integrated @react-pdf/renderer in Week 4 for Resume AI Tailor, so the mental model was already there — React components that describe a PDF layout, rendered server-side to a binary buffer and streamed back to the browser.&lt;/p&gt;

&lt;p&gt;The invoice PDF is a single &lt;em&gt;Document&lt;/em&gt; with one &lt;em&gt;Page&lt;/em&gt;. The layout has five sections: a branded header with the NanoCrafts name and invoice number, a bill-to block with the client name and due date, a line items table with hours, rate, and amount columns, a totals block with subtotal, VAT at 20%, and grand total, and a payment terms footer.&lt;/p&gt;
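&lt;p&gt;The totals block is plain arithmetic. A sketch of the calculation, with VAT hardcoded at 20% as described (an illustrative helper, not the component's actual code):&lt;/p&gt;

```typescript
// Totals as described: subtotal from hours and rate, 20% VAT, grand total on top.
function invoiceTotals(hours: number, rate: number) {
  const subtotal = hours * rate;
  const vat = subtotal * 0.2; // hardcoded 20% UK VAT (made configurable later)
  return { subtotal, vat, total: subtotal + vat };
}
```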

&lt;p&gt;&lt;strong&gt;The gotchas&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Three things caught me that are worth documenting.&lt;/p&gt;

&lt;p&gt;First, renderToBuffer is a named export, not a method on the default export. This sounds obvious but the library's own examples are inconsistent about it. The correct import is:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const { renderToBuffer } = await import('@react-pdf/renderer');&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Not:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ReactPDF&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@react-pdf/renderer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ReactPDF&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;renderToBuffer&lt;/span&gt;&lt;span class="p"&gt;(...);&lt;/span&gt; &lt;span class="c1"&gt;// TypeError&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second, renderToBuffer returns a Node.js Buffer, which isn't directly assignable to the Response body in Next.js App Router. You need to convert it first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pdfBuffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;renderToBuffer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;element&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;uint8&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Uint8Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pdfBuffer&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;uint8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/pdf&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Disposition&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`attachment; filename="&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;invoiceNumber&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.pdf"`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Third, @react-pdf/renderer must be in serverExternalPackages in next.config.ts, otherwise Next.js tries to bundle it client-side and throws. This is the same fix from Resume AI Tailor — it carried over directly.&lt;/p&gt;
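&lt;p&gt;For reference, the relevant part of the config looks roughly like this (a minimal sketch; a real config will carry other options too):&lt;/p&gt;

```typescript
// next.config.ts: keep @react-pdf/renderer out of the client bundle
// by resolving it as an external package on the server.
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  serverExternalPackages: ['@react-pdf/renderer'],
};

export default nextConfig;
```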

&lt;p&gt;&lt;strong&gt;The route&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The PDF endpoint lives at GET /api/invoices/[id]/pdf. It fetches the invoice from Neon, checks ownership against the Clerk userId, renders the PDF server-side, and streams the bytes back. The download link on each invoice card is a plain &lt;em&gt;&amp;lt;a href&amp;gt;&lt;/em&gt; pointing at this route — no JavaScript needed for the download itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lessons vs Resume AI Tailor&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The whole point of a curriculum is compounding. Each project should make the next one faster. Here's an honest breakdown of what carried over, what was new, and where I lost time anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What carried over directly&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Clerk v7 setup was identical. proxy.ts instead of middleware.ts (a Next.js 16 convention change), async await auth(), protected routes via createRouteMatcher — copy, paste, done. Same for Drizzle ORM and Neon Postgres. The lib/db/index.ts singleton pattern, the drizzle.config.ts setup, even the drizzle-kit push workflow — all muscle memory by now.&lt;/p&gt;

&lt;p&gt;@react-pdf/renderer was the biggest time save. Week 4 cost me several hours figuring out RSC conflicts, Buffer conversions, and the serverExternalPackages config. Week 8 cost me about twenty minutes. That's what compounding looks like in practice.&lt;/p&gt;

&lt;p&gt;The Vercel deploy pipeline was also pre-solved. GitHub repo connected, environment variables via npx vercel env add, --prod flag. No surprises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What was genuinely new&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The NLP parser was the core new problem. Resume AI Tailor had no freetext input — everything came from structured file uploads. Building the two-tier regex and OpenAI fallback was the most interesting engineering work of the week.&lt;/p&gt;

&lt;p&gt;Optimistic UI for the status changes was also new. Resume AI Tailor had no real-time state updates — results were fetched once and displayed. The invoice dashboard needed instant feedback when marking an invoice as sent or paid, which meant updating local state immediately and reverting on failure. It's a small pattern but a useful one to have in the toolkit.&lt;/p&gt;
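&lt;p&gt;The pattern itself is small enough to sketch framework-agnostically (names and signatures are illustrative, not the dashboard's code):&lt;/p&gt;

```typescript
// Optimistic status change: show the new status immediately,
// then roll back if the server rejects it.
type Status = 'draft' | 'sent' | 'paid';

async function setStatusOptimistically(
  current: Status,
  next: Status,
  render: (s: Status) => void,  // e.g. a React state setter
  save: (s: Status) => unknown, // e.g. a fetch that persists the change
) {
  render(next); // optimistic: the UI updates before the request resolves
  try {
    await save(next);
  } catch {
    render(current); // revert on failure so the UI never shows a state the server rejected
  }
}
```

&lt;p&gt;The whole trick is that the revert path is written up front, so a failed request can never strand the UI in a state the database doesn't have.&lt;/p&gt;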

&lt;p&gt;&lt;strong&gt;Where I lost time anyway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Honestly, the biggest time sink of the week had nothing to do with the product. It was a Turbopack and Tailwind v4 workspace root detection bug in Next.js 16. When the project folder sits inside a parent directory that has its own package.json or package-lock.json, Turbopack walks up the tree and tries to resolve CSS imports from the wrong root. The fix was simple — scaffold with create-next-app into a clean location rather than cloning the previous project. That lesson cost about three hours.&lt;/p&gt;

&lt;p&gt;The pattern I'd recommend: always start Week N fresh with create-next-app, then manually copy only the files you actually need from the previous project. It takes an extra thirty minutes but saves you from environment issues that are genuinely hard to debug.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time saved overall&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My rough estimate is that reusing patterns from Resume AI Tailor saved around four hours this week. Authentication, database setup, PDF rendering, and deployment would each have taken significantly longer from scratch. That's the compounding effect in action — and it will only get stronger as the curriculum continues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I'd Do Differently&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every project teaches you something you wish you'd known at the start. Here are the three things I'd change if I built this again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Start fresh, don't clone&lt;/strong&gt;&lt;br&gt;
I tried to save time by cloning Resume AI Tailor and stripping it down. It backfired. The nested folder structure triggered a Turbopack workspace detection bug that took hours to diagnose. Starting fresh with create-next-app would have taken thirty minutes and saved three hours. From now on, cloning is off the table. Copy individual files manually, never the whole project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Store line items in the database from day one&lt;/strong&gt;&lt;br&gt;
The current schema stores a single hours and rate column per invoice — a simplification I made to ship faster. Line items are supported in the UI but only the first one gets persisted. In practice, almost every real invoice has multiple line items — design work, development, meetings billed separately. The fix is a separate line_items table with a foreign key to invoices. I'll add this in a future iteration, but it would have been cleaner to design it correctly from the start rather than retrofitting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Make VAT configurable&lt;/strong&gt;&lt;br&gt;
VAT is hardcoded at 20% in the PDF component. That works for UK freelancers but breaks for everyone else — US freelancers don't charge VAT at all, and EU rates vary by country. A simple vatRate field on the invoice with a default of 20 would have taken thirty minutes to add and made the product genuinely international from launch. Instead it's a v2 feature.&lt;/p&gt;

&lt;p&gt;The common thread across all three is the same: the shortcuts that feel like time savers at the start of the week tend to create friction at the end. The two that genuinely saved time — reusing the Clerk and Drizzle patterns — were things I'd already invested in properly on a previous project. The shortcuts I took specifically for this project are the ones I'm now paying back.&lt;/p&gt;

&lt;p&gt;The invoice generator is live at invoice-generator-six-roan-49.vercel.app and the code is open on GitHub at github.com/Azeez1314/invoice-generator. Week 9 of the NanoCrafts curriculum is already planned — if you want to follow along, I post build updates on LinkedIn and Twitter. And if you're a freelancer who invoices clients, give it a try and let me know what's missing.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>saas</category>
      <category>typescript</category>
    </item>
    <item>
      <title>How I Structured User Data for My AI SaaS</title>
      <dc:creator>Azeez Roheem</dc:creator>
      <pubDate>Sun, 22 Mar 2026 18:05:45 +0000</pubDate>
      <link>https://forem.com/azeezroheem/how-i-structured-user-data-for-my-ai-saas-461p</link>
      <guid>https://forem.com/azeezroheem/how-i-structured-user-data-for-my-ai-saas-461p</guid>
      <description>&lt;p&gt;Most developers building their first SaaS make the same mistake I almost made — they reach for sessionStorage because it works in the demo, then discover it breaks the moment a real user opens a second tab. This is the post I wish I'd had before Week 5.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The problem with sessionStorage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Resume Tailor's pipeline works like this: upload a PDF, paste a job description, get AI-rewritten bullets, download a tailored resume. In the demo, sessionStorage holds everything together — the parsed resume, the analysis, the rewritten bullets. It works perfectly. Until a user refreshes the page. Or opens the app on their phone after signing up on their laptop. Or closes the tab by accident.&lt;/p&gt;

&lt;p&gt;sessionStorage is scoped to a single browser tab. It doesn't survive a refresh. It doesn't sync across devices. It's fine for prototyping — it's not a database.&lt;/p&gt;

&lt;p&gt;The fix is obvious in hindsight: persist to Postgres, load from the database on every session. But getting there requires a set of decisions that aren't obvious at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Postgres over MongoDB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first decision was the database. MongoDB is the default choice for a lot of Node.js developers — flexible schema, JSON-native, easy to get started. For Resume Tailor I chose Postgres, and the reason comes down to one concept: referential integrity.&lt;/p&gt;

&lt;p&gt;In MongoDB, if you delete a user, their resume documents don't go anywhere. They sit in the collection, orphaned, pointing at a user ID that no longer exists. You have to remember to clean them up in application code. You will forget.&lt;/p&gt;

&lt;p&gt;In Postgres, you enforce the relationship at the schema level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user_id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;notNull&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;references&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;onDelete&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cascade&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;onDelete: 'cascade'&lt;/em&gt; means when a user row is deleted, every resume row that references it is deleted automatically. The database guarantees clean data — not the application code, not a cron job, not a developer remembering to write the right query.&lt;/p&gt;

&lt;p&gt;Most developers don't think about what happens to related data when a user deletes their account. That one line is the answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The schema&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clerk owns identity. Postgres owns product data. The &lt;em&gt;clerkId&lt;/em&gt; field is the bridge between the two systems.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;pgTable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;defaultRandom&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;primaryKey&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;

&lt;span class="na"&gt;clerkId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;clerk_id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;notNull&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;unique&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;

&lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;email&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;notNull&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;

&lt;span class="na"&gt;plan&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;plan&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;notNull&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;free&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;

&lt;span class="na"&gt;usageCount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;usage_count&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;notNull&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;

&lt;span class="na"&gt;usageLimit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;usage_limit&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;notNull&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;

&lt;span class="na"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;created_at&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;defaultNow&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;

&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;resumes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;pgTable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;resumes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;defaultRandom&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;primaryKey&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;

&lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user_id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;notNull&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;references&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;onDelete&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cascade&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;

&lt;span class="na"&gt;jobTitle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;job_title&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;

&lt;span class="na"&gt;jobDescription&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;job_description&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;

&lt;span class="na"&gt;originalBullets&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;jsonb&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;original_bullets&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;$type&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;

&lt;span class="na"&gt;rewrittenBullets&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;jsonb&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rewritten_bullets&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;$type&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;

&lt;span class="na"&gt;keywords&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;jsonb&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;keywords&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;

&lt;span class="na"&gt;pdfGenerated&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pdf_generated&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;

&lt;span class="na"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;created_at&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;defaultNow&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;

&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;jsonb&lt;/em&gt; for bullets and keywords — not text with &lt;em&gt;JSON.stringify&lt;/em&gt;. jsonb is queryable, indexed, and type-safe with Drizzle's &lt;em&gt;.$type()&lt;/em&gt;. The difference matters when you want to query resumes by keyword later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Usage tracking — why atomic increment matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Free tier means 5 rewrites. Enforcing that limit sounds simple — read the count, check it, increment it. Here's the implementation most developers write:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Read&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;clerkId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;clerkId&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="c1"&gt;// Check&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;usageCount&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;usageLimit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Usage limit reached&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Write&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;usageCount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sql$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;usageCount&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This has a race condition. If two requests arrive simultaneously, both read &lt;em&gt;usageCount: 4&lt;/em&gt;, both pass the check, both increment. The user ends up at 6 when the limit is 5. I verified this with a &lt;em&gt;Promise.all&lt;/em&gt; test — two simultaneous fetch calls to &lt;em&gt;/api/rewrite&lt;/em&gt;. Both returned 200. Both incremented.&lt;/p&gt;
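&lt;p&gt;The race is easy to reproduce in miniature without a database. This toy model is not the app's code, but it shows why both requests slip through:&lt;/p&gt;

```typescript
// Toy in-memory model of the race: both "requests" read a stale count of 4,
// both pass the check, both increment, and the user lands on 6 with a limit of 5.
const LIMIT = 5;
let count = 4;

const tick = () => new Promise((resolve) => setTimeout(resolve, 0)); // stand-in for latency

async function checkThenWrite() {
  const seen = count;              // Read
  await tick();                    // both requests pause here having seen 4
  if (seen >= LIMIT) return false; // Check: passes for both
  count += 1;                      // Write: runs twice
  return true;
}

function atomicIncrement() {
  // One indivisible check-and-increment, like a single UPDATE guarded by the limit.
  if (count >= LIMIT) return false;
  count += 1;
  return true;
}
```

&lt;p&gt;Run the two &lt;em&gt;checkThenWrite&lt;/em&gt; calls under &lt;em&gt;Promise.all&lt;/em&gt; and the counter overshoots to 6; the atomic version stops at 5 because the check and the increment can't be separated.&lt;/p&gt;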

&lt;p&gt;The fix is a single atomic query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;usageCount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sql$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;usageCount&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;

&lt;span class="nx"&gt;sql$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;clerkId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;clerkId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;AND&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;usageCount&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;usageLimit&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;returning&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Usage limit reached&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;WHERE&lt;/em&gt; clause does the check and the increment in one shot. If two requests arrive simultaneously, only one matches the condition. The other gets no rows back and throws. No race condition possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ownership check — the one line that prevents a data breach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every resume has a UUID as its ID. UUIDs are hard to guess but not secret — they appear in API responses, URLs, and network logs. Any authenticated user could find a UUID and call &lt;em&gt;DELETE /api/resumes/:id&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Without an ownership check, they can delete anyone's data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;NextResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Forbidden&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;403&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This line is not optional. Fetch the row, compare the userId to the authenticated user's ID, return 403 if they don't match. Every delete, every update, every read of sensitive data needs this check.&lt;/p&gt;
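
&lt;p&gt;The rule can be distilled into a tiny pure function. This is an illustrative sketch (the real route fetches the row with Drizzle and responds with NextResponse); the helper name and shapes are hypothetical:&lt;/p&gt;

```javascript
// Hypothetical helper capturing the ownership rule: decide the HTTP
// status from the fetched row and the authenticated user's id.
function ownershipStatus(row, authedUserId) {
  if (!row) return 404;                         // no such resume
  if (row.userId !== authedUserId) return 403;  // someone else's data
  return 200;                                   // safe to proceed
}

console.log(ownershipStatus({ userId: 'user_A' }, 'user_B')); // 403
console.log(ownershipStatus({ userId: 'user_A' }, 'user_A')); // 200
```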

&lt;p&gt;I tested it deliberately — signed in as User A, called DELETE with a resume ID belonging to User B. Without the check: 200, data gone. With the check: 403, data safe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What the webhook solves&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clerk handles authentication — sign up, sign in, social providers, session management. But when a user signs up through Clerk, your Postgres database doesn't know about it.&lt;/p&gt;

&lt;p&gt;The webhook bridges the two systems:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;User&lt;/em&gt; signs up on &lt;em&gt;Clerk&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→ &lt;em&gt;Clerk&lt;/em&gt; fires &lt;em&gt;POST&lt;/em&gt; to /api/webhooks/clerk&lt;/p&gt;

&lt;p&gt;→ &lt;em&gt;Route&lt;/em&gt; verifies the svix signature&lt;/p&gt;

&lt;p&gt;→ &lt;em&gt;Inserts&lt;/em&gt; a row into the users table&lt;/p&gt;

&lt;p&gt;→ clerkId links &lt;em&gt;Clerk&lt;/em&gt; and &lt;em&gt;Postgres&lt;/em&gt; from this point forward&lt;/p&gt;

&lt;p&gt;The same pattern handles account deletion — &lt;em&gt;user.deleted&lt;/em&gt; event fires, the route deletes the user row, the cascade foreign key cleans up every resume automatically.&lt;/p&gt;
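
&lt;p&gt;The event handling reduces to a small dispatch on the event type. A sketch, with the actions simplified to plain objects for illustration (the real route verifies the svix signature first and writes to Postgres):&lt;/p&gt;

```javascript
// Dispatch on the Clerk event type, after signature verification.
// Actions are simplified to plain objects for illustration.
function handleClerkEvent(event) {
  switch (event.type) {
    case 'user.created':
      return { action: 'insert', clerkId: event.data.id }; // new users row
    case 'user.deleted':
      return { action: 'delete', clerkId: event.data.id }; // cascade FK cleans up resumes
    default:
      return { action: 'ignore' };                         // event we don't track
  }
}
```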

&lt;p&gt;Without the webhook, your database has no record of who signed up. With it, every Clerk event that matters is reflected in Postgres within seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I'd tell myself before Week 5&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pick Postgres when relationships matter. Use &lt;em&gt;onDelete: cascade&lt;/em&gt; — don't leave orphan cleanup to application code. Make usage checks atomic. Always verify ownership before mutating data. Wire the webhook before you build anything that depends on user rows existing.&lt;/p&gt;

&lt;p&gt;These aren't advanced concepts. They're the decisions that separate a demo from a product.&lt;/p&gt;

&lt;p&gt;Resume Tailor is live at resume-ai-one-lac.vercel.app. Source on GitHub: github.com/Azeez1314/resume-ai.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>drizzle</category>
      <category>postgres</category>
      <category>saas</category>
    </item>
    <item>
      <title>From PDF to ATS-Optimised Resume in Three Steps — How I Built It with Next.js and OpenAI</title>
      <dc:creator>Azeez Roheem</dc:creator>
      <pubDate>Sun, 15 Mar 2026 12:17:54 +0000</pubDate>
      <link>https://forem.com/azeezroheem/from-pdf-to-ats-optimised-resume-in-three-steps-how-i-built-it-with-nextjs-and-openai-43gd</link>
      <guid>https://forem.com/azeezroheem/from-pdf-to-ats-optimised-resume-in-three-steps-how-i-built-it-with-nextjs-and-openai-43gd</guid>
<description>&lt;p&gt;I often wished I could see which skills my resume was missing and&lt;br&gt;
which it already had. I wanted an app that fixes those gaps&lt;br&gt;
automatically, so I could get a better resume and apply faster.&lt;/p&gt;

&lt;p&gt;As job applicants, we need both speed and ATS-optimised resumes.&lt;br&gt;
Manually checking a resume against a job description is time-consuming&lt;br&gt;
and requires knowledge most candidates don't have. Most people have&lt;br&gt;
never heard of ATS keywords, let alone know how to use them.&lt;/p&gt;

&lt;p&gt;So I built a pipeline to do it automatically. Here's how it works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — PDF Upload and Extraction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pipeline starts with a PDF upload. PDF is the standard format&lt;br&gt;
for resumes, but extracting clean text from one is harder than it&lt;br&gt;
sounds. Column layouts, custom fonts, and formatting cause the raw&lt;br&gt;
text to come out in the wrong order — a candidate's name can end up&lt;br&gt;
in the wrong section entirely.&lt;/p&gt;

&lt;p&gt;extractAndStructure() solves this in two steps. First it cleans the&lt;br&gt;
raw text — removing blank lines, trimming whitespace, and splitting&lt;br&gt;
it into named sections. Then it sends those sections to OpenAI, which&lt;br&gt;
reorganises them into a structured JSON object with fields for name,&lt;br&gt;
experience, skills, education, and contact details.&lt;/p&gt;

&lt;p&gt;Structured JSON is returned instead of raw text because every step&lt;br&gt;
after this needs to read specific fields. The analyse route needs the&lt;br&gt;
skills array. The rewrite route needs the experience highlights. Raw&lt;br&gt;
text would require re-parsing at every step — structuring once here&lt;br&gt;
makes the rest of the pipeline simple.&lt;/p&gt;
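
&lt;p&gt;A rough sketch of the cleaning half, assuming a simple heading-based splitter. The heading list and logic here are illustrative; the real reorganisation is done by OpenAI afterwards:&lt;/p&gt;

```javascript
// Illustrative heading list — not the app's actual configuration.
const HEADINGS = ['experience', 'education', 'skills'];

// Trim whitespace, drop blank lines, and split on recognised headings.
function cleanAndSection(raw) {
  const sections = { header: [] };
  let current = 'header';
  for (const line of raw.split('\n')) {
    const text = line.trim();
    if (!text) continue;                          // drop blank lines
    if (HEADINGS.includes(text.toLowerCase())) {  // start a new named section
      current = text.toLowerCase();
      sections[current] = [];
      continue;
    }
    sections[current].push(text);
  }
  return sections;
}
```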

&lt;p&gt;&lt;strong&gt;Step 2 — Match Analysis and Keyword Extraction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the user submits a job description, two functions run at the&lt;br&gt;
same time — extractKeywords() and analyseMatch(). They run in&lt;br&gt;
parallel because neither needs the other's result. Both only need&lt;br&gt;
the resume and job description, so there is no reason to wait.&lt;br&gt;
Running them simultaneously cuts the response time roughly in half.&lt;/p&gt;
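
&lt;p&gt;The parallel step looks roughly like this, with stub bodies standing in for the real OpenAI calls (the return shapes are illustrative):&lt;/p&gt;

```javascript
// Stubs standing in for the real OpenAI-backed functions (illustrative).
async function extractKeywords(jobDescription) {
  return { atsKeywords: ['React', 'TypeScript'] };
}
async function analyseMatch(resume, jobDescription) {
  return { score: 62, missingSkills: ['GraphQL'] };
}

async function analyse(resume, jobDescription) {
  // Promise.all starts both calls before awaiting either one,
  // so total latency is max(a, b) rather than a + b.
  const [keywords, match] = await Promise.all([
    extractKeywords(jobDescription),
    analyseMatch(resume, jobDescription),
  ]);
  return { keywords, match };
}
```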

&lt;p&gt;extractKeywords() pulls required skills, tools and technologies,&lt;br&gt;
and ATS keywords directly from the job description. analyseMatch()&lt;br&gt;
scores the resume against it — returning a match score out of 100,&lt;br&gt;
matched skills, and missing skills.&lt;/p&gt;

&lt;p&gt;The match score tells the user how well their resume fits the role&lt;br&gt;
before any rewriting begins. It is most useful for resumes scoring&lt;br&gt;
between 40 and 70 — where real gaps exist but the candidate is not&lt;br&gt;
a wrong fit. The user sees the score and extracted keywords before&lt;br&gt;
continuing, so they can decide whether to proceed with the rewrite&lt;br&gt;
or try a different role entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 — Bullet Rewriting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the user hits Rewrite &amp;amp; Continue, the pipeline combines&lt;br&gt;
missingSkills and atsKeywords into one target list. missingSkills&lt;br&gt;
comes from the match analysis — skills the resume lacks. atsKeywords&lt;br&gt;
comes from the job description itself — what the ATS scanner is&lt;br&gt;
looking for. Combining both gives the rewriter the most complete&lt;br&gt;
picture of what the role needs.&lt;/p&gt;

&lt;p&gt;For each bullet, three functions run in sequence. scoreBullet()&lt;br&gt;
rates the bullet on three criteria — action verb, skill or tool,&lt;br&gt;
and outcome. Bullets that score poorly are flagged for rewriting.&lt;br&gt;
Bullets that are already strong are left unchanged. rewriteBullet()&lt;br&gt;
then rewrites only the flagged bullets, incorporating the target&lt;br&gt;
keywords naturally. validateRewrite() runs immediately after each&lt;br&gt;
rewrite — checking that the keywords fit naturally and the&lt;br&gt;
truthfulness risk is low. If the rewrite fails validation, the&lt;br&gt;
original bullet is kept.&lt;/p&gt;
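
&lt;p&gt;Per bullet, the control flow is: score, rewrite only if weak, keep the original if validation fails. A sketch with stubbed AI calls — only the function names come from the pipeline; the stub logic is a toy stand-in:&lt;/p&gt;

```javascript
// Toy stand-ins for the AI calls (illustrative logic only).
async function scoreBullet(bullet) {
  return { strong: bullet.includes('React') };           // toy scoring rule
}
async function rewriteBullet(bullet, keywords) {
  return bullet + ' using ' + keywords.join(' and ');
}
async function validateRewrite(original, rewritten) {
  return { passed: rewritten.length > original.length }; // toy validation
}

async function tailorBullet(bullet, targetKeywords) {
  const score = await scoreBullet(bullet);
  if (score.strong) return bullet;                 // already strong: keep it
  const rewritten = await rewriteBullet(bullet, targetKeywords);
  const check = await validateRewrite(bullet, rewritten);
  return check.passed ? rewritten : bullet;        // failed validation: keep original
}
```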

&lt;p&gt;The user sees a before and after comparison for every bullet that&lt;br&gt;
changed, a summary of how many were improved, and a button to&lt;br&gt;
download the tailored resume.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I'd Do Differently&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pipeline works best for candidates with match scores between&lt;br&gt;
40 and 70. In this range, real gaps exist but the candidate is not&lt;br&gt;
a wrong fit for the role. Above 80, the resume is already strong —&lt;br&gt;
rewriting bullets won't move the needle. Below 40, the role itself&lt;br&gt;
is likely the wrong target.&lt;/p&gt;

&lt;p&gt;The biggest limitation right now is PDF format. Most people write&lt;br&gt;
their resume in Word or Google Docs and export to PDF — but if the&lt;br&gt;
export is done incorrectly, the PDF becomes a scanned image with no&lt;br&gt;
text layer. The pipeline fails at the first step and the user has&lt;br&gt;
no way to continue without fixing their file. Word documents and&lt;br&gt;
Google Docs exports need to be supported directly in a future version.&lt;/p&gt;

&lt;p&gt;The final piece is PDF generation. Right now the pipeline analyses&lt;br&gt;
and rewrites but the output lives on a web page. Week 4 turns the&lt;br&gt;
rewritten JSON into a formatted, downloadable PDF — a complete,&lt;br&gt;
ATS-optimised resume the candidate can send directly to recruiters.&lt;br&gt;
That is what transforms this from a pipeline into a product.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Week 3 of my AI/ML learning curriculum. Week 2 covered&lt;br&gt;
the Node.js pipeline that powers this app. The full code is on&lt;br&gt;
GitHub: github.com/Azeez1314&lt;/em&gt;&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>ai</category>
      <category>typescript</category>
      <category>openai</category>
    </item>
    <item>
      <title>6 Job Applications Used to Consume My Day. Here's the AI Pipeline I Built to Fix That.</title>
      <dc:creator>Azeez Roheem</dc:creator>
      <pubDate>Fri, 06 Mar 2026 10:40:27 +0000</pubDate>
      <link>https://forem.com/azeezroheem/6-job-applications-used-to-consume-my-day-heres-the-ai-pipeline-i-built-to-fix-that-301o</link>
      <guid>https://forem.com/azeezroheem/6-job-applications-used-to-consume-my-day-heres-the-ai-pipeline-i-built-to-fix-that-301o</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Applying for 6 roles used to consume my day. Refining my CV, &lt;br&gt;
checking for matching words, reordering bullet points — then the &lt;br&gt;
awkward silence. My friends voiced the same frustration. We weren't &lt;br&gt;
bad candidates. The process was just killing our output.&lt;/p&gt;

&lt;p&gt;The actual culprit is ATS software — the stern gatekeeper that blocks &lt;br&gt;
resumes from reaching the hiring manager. It filters applications when &lt;br&gt;
keywords don't match the job description closely enough.&lt;/p&gt;

&lt;p&gt;I built a pipeline to fix that. Here's how it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Prompt Chain Works
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — PDF to Text&lt;/strong&gt;&lt;br&gt;
Most resumes arrive as PDFs — but PDFs don't store text cleanly. &lt;br&gt;
The pipeline extracts raw text, removes blank lines, trims whitespace, &lt;br&gt;
and separates it into named sections like Experience, Education and Skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 — AI Structuring&lt;/strong&gt;&lt;br&gt;
Raw sections are not organised. OpenAI reorganises them into clean JSON. &lt;br&gt;
This fixes column ordering issues and joins split lines. When the &lt;br&gt;
candidate's name ends up in the wrong section, OpenAI pulls it out &lt;br&gt;
and places it correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 — Job Match Analysis&lt;/strong&gt;&lt;br&gt;
The structured resume is compared against the job description. &lt;br&gt;
The AI returns a match score, matched skills, and missing skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4 — Bullet Rewriting&lt;/strong&gt;&lt;br&gt;
Each bullet is scored on three criteria: action verb, skill, and outcome. &lt;br&gt;
Weak bullets are rewritten to incorporate missing keywords naturally. &lt;br&gt;
A second AI call validates whether the rewrite is truthful before accepting it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5 — Before/After Measurement&lt;/strong&gt;&lt;br&gt;
The original and tailored resumes are both scored against the job description. &lt;br&gt;
The difference tells you exactly how much the pipeline improved the match.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before and After
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Example 1:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before: &lt;em&gt;"Applied agile methodologies like SCRUM for project management"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;An ATS scanning for React or Node.js will simply ignore this. It has &lt;br&gt;
the right idea but stops short — right methodology, no context. What &lt;br&gt;
were you building? What did SCRUM help you ship?&lt;/p&gt;

&lt;p&gt;After: &lt;em&gt;"Implemented Agile/SCRUM methodology to accelerate delivery &lt;br&gt;
across full-stack projects using React and Node.js"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;An ATS scanning for React or Node.js will now pick this up. It shows &lt;br&gt;
the tech stack, shows ownership, and connects the methodology to real work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 2:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before: &lt;em&gt;"Automated invoicing systems reducing admin time by up to 60%"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This already mentions a strong outcome — 60%, an actual figure worth &lt;br&gt;
retaining. But it doesn't mention any tool, so an ATS looking for &lt;br&gt;
Node.js or MongoDB sees nothing.&lt;/p&gt;

&lt;p&gt;After: &lt;em&gt;"Automated invoicing systems using Node.js and MongoDB, &lt;br&gt;
reducing admin time by up to 60% for SaaS clients"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Same outcome, same honesty — but now it's visible to ATS systems &lt;br&gt;
scanning for the right keywords.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The 0 Delta Surprise&lt;/strong&gt;&lt;br&gt;
When I measured before and after, the score didn't move. My initial &lt;br&gt;
reaction was that something was wrong. However, nothing was wrong. &lt;br&gt;
This is the correct behaviour when a resume already matches the role. &lt;br&gt;
I discovered the pipeline is most valuable for resumes with match &lt;br&gt;
scores of 40-70. High-scoring resumes (80+) need new experience, &lt;br&gt;
not rewriting. Low-scoring resumes (under 40) need a different role target.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Force-Fitting Problem&lt;/strong&gt;&lt;br&gt;
The AI kept inserting keywords that didn't belong — prompt engineering &lt;br&gt;
appearing in a PDF pipeline bullet, MongoDB showing up in a SCRUM bullet. &lt;br&gt;
The fix is not instructing the AI not to hallucinate. It is passing only &lt;br&gt;
keywords that are genuinely missing and relevant to that specific bullet's &lt;br&gt;
context. A smarter pre-filter before the rewrite call would have solved this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's Next&lt;/strong&gt;&lt;br&gt;
The output at the end of Week 2 was JSON — which a developer would &lt;br&gt;
understand, but a job applicant needs a document they can forward to &lt;br&gt;
recruiters. In Week 3, that JSON gets turned into a formatted, tailored &lt;br&gt;
PDF — a ready-made resume. This is what transforms the project into a product.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Week 2 of my AI/ML learning curriculum. Week 1 covered &lt;br&gt;
OpenAI API fundamentals and keyword extraction. The full code is &lt;br&gt;
on GitHub: github.com/Azeez1314&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>javascript</category>
      <category>career</category>
      <category>node</category>
    </item>
    <item>
      <title>Building My First AI Resume Parsing Script (Lessons Learned)</title>
      <dc:creator>Azeez Roheem</dc:creator>
      <pubDate>Sun, 01 Mar 2026 11:57:36 +0000</pubDate>
      <link>https://forem.com/azeezroheem/building-my-first-ai-resume-parsing-script-lessons-learned-56l1</link>
      <guid>https://forem.com/azeezroheem/building-my-first-ai-resume-parsing-script-lessons-learned-56l1</guid>
      <description>&lt;p&gt;Every job applicant knows the feeling — you find a great role, &lt;br&gt;
read through the requirements, and spend 20 minutes manually &lt;br&gt;
figuring out which of your skills to highlight. I wanted to &lt;br&gt;
automate that using the OpenAI API. What I didn't expect was &lt;br&gt;
that the hardest part wouldn't be the AI — it would be &lt;br&gt;
controlling what the AI actually returns.&lt;/p&gt;


&lt;h2&gt;
  
  
  What I Set Out to Build
&lt;/h2&gt;

&lt;p&gt;Job applicants face two problems. First, their CV often doesn't &lt;br&gt;
get picked because it doesn't match the language the employer &lt;br&gt;
used in the job description. Second, going through each job &lt;br&gt;
description manually is slow — it limits how many quality &lt;br&gt;
applications you can submit in a day.&lt;/p&gt;

&lt;p&gt;I wanted to build a script that solves both. Give it any job &lt;br&gt;
description, get back a structured list of the exact keywords &lt;br&gt;
to highlight on your resume. Simple idea. Harder to build &lt;br&gt;
than I expected.&lt;/p&gt;


&lt;h2&gt;
  
  
  What I Tried That Didn't Work
&lt;/h2&gt;

&lt;p&gt;My first attempt looked like it was working. I sent a job &lt;br&gt;
description to the API and got something back that looked &lt;br&gt;
like JSON. But when I tried to use it in my script, &lt;br&gt;
everything broke.&lt;/p&gt;

&lt;p&gt;Here's what the raw output actually looked like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"keywords"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"Senior Frontend Engineer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"React"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"TypeScript"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"REST APIs"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the backticks wrapping it. That's markdown formatting — &lt;br&gt;
not valid JSON. When I ran JSON.parse() on that raw string, &lt;br&gt;
my script crashed immediately.&lt;/p&gt;
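
&lt;p&gt;One workaround is to strip the fences before parsing. A defensive sketch (a band-aid, not the real fix): &lt;em&gt;\x60&lt;/em&gt; is the backtick character, written as an escape so the fence renders here:&lt;/p&gt;

```javascript
// Workaround sketch: strip a leading/trailing markdown fence, then parse.
// \x60 is the backtick character.
function parseModelJson(raw) {
  const cleaned = raw
    .trim()
    .replace(/^\x60{3}(?:json)?\s*/, '')  // drop a leading fence line
    .replace(/\x60{3}\s*$/, '');          // drop a trailing fence
  return JSON.parse(cleaned);
}
```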

&lt;p&gt;But the backticks were only part of the problem. I ran the &lt;br&gt;
same job description 5 times and got 5 slightly different &lt;br&gt;
results. Keywords were dropped between runs. Phrasing changed. &lt;br&gt;
On one run "strong communication skills" appeared, on another &lt;br&gt;
just "communication skills." The structure itself changed — &lt;br&gt;
sometimes the model added explanation text before the JSON, &lt;br&gt;
sometimes not.&lt;/p&gt;

&lt;p&gt;The problem wasn't dramatic failures. It was subtle &lt;br&gt;
inconsistency that would silently corrupt a real application &lt;br&gt;
over time. I was being polite with my prompt — "return the &lt;br&gt;
results in JSON format" — when I needed to be strict.&lt;/p&gt;

&lt;p&gt;I also made the mistake of setting temperature to 1 while &lt;br&gt;
testing. Higher temperature introduces randomness — useful &lt;br&gt;
for creative tasks, catastrophic for structured extraction. &lt;br&gt;
Temperature 1 didn't just vary the content. It varied the &lt;br&gt;
structure itself, meaning different field names on every run. &lt;br&gt;
For extraction tasks, always use temperature 0.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Moment response_format Changed Everything
&lt;/h2&gt;

&lt;p&gt;I spent longer than I'd like to admit fighting with output &lt;br&gt;
format before I found this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;response_format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;json_object&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This single line changed everything. Here's the difference &lt;br&gt;
between asking and enforcing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before — prompt instruction only:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Extract keywords and return them in JSON format.`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: inconsistent, backtick-wrapped, unparseable output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After — API-level enforcement:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;response_format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;json_object&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[...]&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: clean, consistent, directly parseable JSON every time.&lt;/p&gt;

&lt;p&gt;When you add response_format you're not giving the model &lt;br&gt;
another instruction. You're flipping a switch at the API &lt;br&gt;
level that forces valid JSON output, validates it before &lt;br&gt;
returning it to you, and guarantees JSON.parse() will &lt;br&gt;
never throw on the response.&lt;/p&gt;

&lt;p&gt;No backticks. No explanation text. No inconsistency.&lt;/p&gt;


&lt;h2&gt;
  
  
  What the Final Script Does
&lt;/h2&gt;

&lt;p&gt;The finished extractor takes any job description and returns &lt;br&gt;
a structured JSON object with 7 fields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"job_title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"experience_level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"required_skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"nice_to_have_skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tools_and_technologies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"soft_skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ats_keywords"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a real example. Input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Senior React Developer. 3+ years experience required.
Must know React, TypeScript, and REST APIs.
Nice to have: GraphQL, AWS.
Tools: Figma, GitHub, Jira.
Strong communication skills required.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"job_title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Senior React Developer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"experience_level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3+ years"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"required_skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"React"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TypeScript"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REST APIs"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"nice_to_have_skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"GraphQL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tools_and_technologies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Figma"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GitHub"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Jira"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"soft_skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"strong communication skills"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ats_keywords"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Senior React Developer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"React"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; 
                   &lt;/span&gt;&lt;span class="s2"&gt;"TypeScript"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REST APIs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Figma"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script also includes retry logic with exponential backoff, input validation that rejects empty or very short descriptions before spending tokens, output validation that catches meaningless results, and token tracking on every call so you're always aware of cost.&lt;/p&gt;
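&lt;p&gt;The backoff pattern is simple enough to sketch. This is a minimal, standalone version; the function name, attempt count, and delay values are illustrative, not the exact code from the repo:&lt;/p&gt;

```javascript
// Minimal sketch of retry with exponential backoff (illustrative, not the repo's exact code).
// Retries a failing async task, doubling the wait between attempts: 500ms, 1s, 2s, ...
async function withRetry(task, maxAttempts = 3, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await task();
    } catch (err) {
      // Out of retries: surface the last error to the caller.
      if (attempt >= maxAttempts - 1) throw err;
      // Wait baseDelayMs * 2^attempt before the next try.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```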

&lt;p&gt;Full code is on GitHub: &lt;a href="https://github.com/Azeez1314/ai-keyword-extractor" rel="noopener noreferrer"&gt;github.com/Azeez1314/ai-keyword-extractor&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Add response_format on day one.&lt;/strong&gt; I spent hours fighting inconsistent output that vanished the moment I added that single line. Enforce structure at the API level — not just through prompt instructions.&lt;/p&gt;
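&lt;p&gt;Note that &lt;code&gt;response_format&lt;/code&gt; guarantees parseable JSON, not that the fields you asked for are actually present. A minimal sketch of the post-parse check that pairs with it (field names taken from the extractor output above; the helper itself is illustrative):&lt;/p&gt;

```javascript
// Sketch: validate the parsed model output before trusting it.
// response_format gets you valid JSON; this checks it has the right shape.
function parseExtraction(rawJson) {
  const data = JSON.parse(rawJson);
  const requiredArrays = ["required_skills", "nice_to_have_skills", "ats_keywords"];
  for (const key of requiredArrays) {
    if (!Array.isArray(data[key])) {
      throw new Error(`Model output missing array field: ${key}`);
    }
  }
  if (typeof data.job_title !== "string" || data.job_title.trim() === "") {
    throw new Error("Model output missing job_title");
  }
  return data;
}
```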

&lt;p&gt;&lt;strong&gt;Take prompt engineering more seriously from the start.&lt;/strong&gt; I assumed the model would figure out what I meant. It didn't — it did exactly what I said, which was often not what I meant. Every vague instruction became a bug. Every specific constraint became a feature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Track token usage from the first call.&lt;/strong&gt; My system prompt grew from 133 tokens to 700 tokens through refinement. That's a 5x cost increase per call that I only noticed at the end. Log tokens on every call from day one.&lt;/p&gt;
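&lt;p&gt;A running tally is enough to catch that kind of creep. Something like this tracker, which sums the &lt;code&gt;usage&lt;/code&gt; object returned on each API response (the helper and the per-token prices are illustrative placeholders, not real rates):&lt;/p&gt;

```javascript
// Sketch: accumulate token usage across calls so cost creep is visible early.
// The `usage` shape matches what chat-completion responses report;
// the per-1k-token prices are placeholders, not real rates.
function createTokenTracker(promptPricePer1k = 0.0, completionPricePer1k = 0.0) {
  let promptTokens = 0;
  let completionTokens = 0;
  return {
    // Call after every API response with response.usage.
    record(usage) {
      promptTokens += usage.prompt_tokens;
      completionTokens += usage.completion_tokens;
    },
    totals() {
      return {
        promptTokens,
        completionTokens,
        estimatedCost:
          (promptTokens / 1000) * promptPricePer1k +
          (completionTokens / 1000) * completionPricePer1k,
      };
    },
  };
}
```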




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;This extractor is the first step in a larger pipeline. The next version will accept both a job description and a candidate's resume, compare the extracted keywords against the candidate's actual experience, and suggest specific improvements to make the resume a closer match.&lt;/p&gt;

&lt;p&gt;If you're building something similar or have questions about any part of this, the full code is available at &lt;a href="https://github.com/Azeez1314/ai-keyword-extractor" rel="noopener noreferrer"&gt;github.com/Azeez1314/ai-keyword-extractor&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>node</category>
    </item>
    <item>
      <title>Building My Developer Portfolio with Next.js, shadcn/ui &amp; Google Cloud Run 🚀</title>
      <dc:creator>Azeez Roheem</dc:creator>
      <pubDate>Sun, 01 Feb 2026 23:11:16 +0000</pubDate>
      <link>https://forem.com/azeezroheem/building-my-developer-portfolio-with-nextjs-shadcnui-google-cloud-run-3eae</link>
      <guid>https://forem.com/azeezroheem/building-my-developer-portfolio-with-nextjs-shadcnui-google-cloud-run-3eae</guid>
      <description>&lt;h2&gt;
  
  
  Hey there! 👋
&lt;/h2&gt;

&lt;p&gt;I'm &lt;strong&gt;Azeez Roheem&lt;/strong&gt;, a full-stack JavaScript developer with 5+ years of experience building scalable ecommerce systems and impact-driven digital products. When I saw the &lt;strong&gt;"New Year, New You" Portfolio Challenge&lt;/strong&gt; by Google AI, I knew it was the perfect opportunity to finally revamp my portfolio website.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Why I Joined This Challenge
&lt;/h2&gt;

&lt;p&gt;My old portfolio was outdated and didn't reflect my current skills or the work I've been doing. I'd been putting off updating it for months (we've all been there, right? 😅). &lt;/p&gt;

&lt;p&gt;When I discovered this challenge, I thought: &lt;em&gt;"What better motivation than a deadline, prizes, and the chance to learn something new?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I wanted to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give my portfolio a fresh, modern look&lt;/li&gt;
&lt;li&gt;Explore deploying to Google Cloud for the first time&lt;/li&gt;
&lt;li&gt;Push myself to learn new tools and workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🖥️ Live Portfolio
&lt;/h2&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://developer-portfolio-jgtmqruaaq-uc.a.run.app/"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;Live URL:&lt;/strong&gt; &lt;a href="https://developer-portfolio-jgtmqruaaq-uc.a.run.app/" rel="noopener noreferrer"&gt;https://developer-portfolio-jgtmqruaaq-uc.a.run.app/&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ Tech Stack
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Framework&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Next.js 14 (App Router)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Language&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;TypeScript&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Styling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tailwind CSS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Components&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;shadcn/ui (customized)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Animations&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Framer Motion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deployment&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Google Cloud Run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Container&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Docker (multi-stage build)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🎨 Design Approach
&lt;/h2&gt;

&lt;p&gt;I wanted something that stands out from typical developer portfolios. I went with an &lt;strong&gt;editorial meets modern&lt;/strong&gt; aesthetic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Playfair Display&lt;/strong&gt; serif font for elegant headlines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JetBrains Mono&lt;/strong&gt; for code elements&lt;/li&gt;
&lt;li&gt;Warm cream background with vibrant orange accents&lt;/li&gt;
&lt;li&gt;Subtle grain texture for depth&lt;/li&gt;
&lt;li&gt;Animated code window in the hero section&lt;/li&gt;
&lt;li&gt;Smooth scroll-triggered animations with Framer Motion&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🚧 The Interesting Challenge: First Time with Google Cloud!
&lt;/h2&gt;

&lt;p&gt;This was my &lt;strong&gt;first time deploying anything to Google Cloud&lt;/strong&gt;, and honestly, it was both exciting and a bit nerve-wracking!&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Learned Along the Way
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Setting up the Google Cloud CLI&lt;/strong&gt;&lt;br&gt;
I had to install the &lt;code&gt;gcloud&lt;/code&gt; CLI and authenticate. Simple enough, but it was new territory for me.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud auth login
gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;project personal-portfolio-486121
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Docker Multi-Stage Builds&lt;/strong&gt;&lt;br&gt;
I learned how to optimize my Docker image using multi-stage builds to keep the final image small:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stage 1: Dependencies&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:20-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;deps&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci

&lt;span class="c"&gt;# Stage 2: Build&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:20-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=deps /app/node_modules ./node_modules&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="c"&gt;# Stage 3: Production (minimal image)&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:20-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;runner&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/.next/standalone ./&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/.next/static ./.next/static&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "server.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Cloud Build + Cloud Run Deployment&lt;/strong&gt;&lt;br&gt;
The actual deployment was surprisingly straightforward once everything was set up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the container&lt;/span&gt;
gcloud builds submit &lt;span class="nt"&gt;--tag&lt;/span&gt; gcr.io/&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;/developer-portfolio

&lt;span class="c"&gt;# Deploy to Cloud Run&lt;/span&gt;
gcloud run deploy developer-portfolio &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image&lt;/span&gt; gcr.io/&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;/developer-portfolio &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--platform&lt;/span&gt; managed &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-central1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--allow-unauthenticated&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within minutes, my portfolio was live with a real URL! 🎉&lt;/p&gt;

&lt;h3&gt;
  
  
  The "Aha!" Moment
&lt;/h3&gt;

&lt;p&gt;Seeing my portfolio deployed and accessible from anywhere was incredibly satisfying. Google Cloud Run handles all the scaling automatically — if my portfolio suddenly gets a lot of traffic (fingers crossed! 🤞), it'll scale up. If no one's visiting, it scales down to zero. Pretty neat for a portfolio site!&lt;/p&gt;




&lt;h2&gt;
  
  
  🤖 Exploring AI in the Process
&lt;/h2&gt;

&lt;p&gt;While building this portfolio, I also explored using AI as a development companion. It helped me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate boilerplate code faster&lt;/li&gt;
&lt;li&gt;Debug deployment issues&lt;/li&gt;
&lt;li&gt;Refine my design approach&lt;/li&gt;
&lt;li&gt;Write cleaner, more maintainable components&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's amazing how AI tools can accelerate the development process when you're learning new platforms like Google Cloud.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✨ Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Animated Hero Section
&lt;/h3&gt;

&lt;p&gt;A floating code window that "types out" my developer profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;developer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Azeez Roheem&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;stack&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;React&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Next.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Node&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;focus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Scalable Solutions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;available&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Interactive Project Cards
&lt;/h3&gt;

&lt;p&gt;Hover effects that reveal project details with smooth transitions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scroll-Triggered Animations
&lt;/h3&gt;

&lt;p&gt;Using Framer Motion's &lt;code&gt;whileInView&lt;/code&gt; for elegant reveal effects as you scroll.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fully Responsive
&lt;/h3&gt;

&lt;p&gt;Mobile-first design that looks great on any device.&lt;/p&gt;




&lt;h2&gt;
  
  
  📁 My Projects Showcased
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. NanoCrafts - Software Consulting Agency&lt;/strong&gt;&lt;br&gt;
My technology consulting firm delivering end-to-end digital transformation services.&lt;br&gt;
🔗 &lt;a href="https://www.nanocrafts.xyz/" rel="noopener noreferrer"&gt;nanocrafts.xyz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Teeq - Ecommerce Dashboard&lt;/strong&gt;&lt;br&gt;
Full-stack e-commerce solution with product management and inventory tracking.&lt;br&gt;
🔗 &lt;a href="https://teeq.vercel.app/" rel="noopener noreferrer"&gt;teeq.vercel.app&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🎓 Key Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Google Cloud Run is beginner-friendly&lt;/strong&gt; — Even as a first-timer, I got my app deployed without major issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker knowledge pays off&lt;/strong&gt; — Understanding containers made the deployment process much smoother&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI accelerates learning&lt;/strong&gt; — Using AI tools helped me quickly understand new concepts and debug issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Challenges create motivation&lt;/strong&gt; — Sometimes you just need a deadline to finally update that portfolio!&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  🔗 Connect With Me
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Portfolio:&lt;/strong&gt; &lt;a href="https://developer-portfolio-jgtmqruaaq-uc.a.run.app/" rel="noopener noreferrer"&gt;developer-portfolio-jgtmqruaaq-uc.a.run.app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/Azeez1314" rel="noopener noreferrer"&gt;github.com/Azeez1314&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn:&lt;/strong&gt; &lt;a href="https://linkedin.com/in/azeezroheem" rel="noopener noreferrer"&gt;linkedin.com/in/azeezroheem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Website:&lt;/strong&gt; &lt;a href="https://azeezroheem.dev" rel="noopener noreferrer"&gt;azeezroheem.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email:&lt;/strong&gt; &lt;a href="mailto:nanocrafts199@gmail.com"&gt;nanocrafts199@gmail.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🙏 Thanks for Reading!
&lt;/h2&gt;

&lt;p&gt;This challenge pushed me out of my comfort zone and taught me something new. If you've been putting off updating your portfolio (like I was), take this as your sign to just start!&lt;/p&gt;

&lt;p&gt;Drop a ❤️ if you found this helpful, or leave a comment — I'd love to hear about your experience with the challenge!&lt;/p&gt;

&lt;p&gt;Happy coding! 🚀&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with Next.js, shadcn/ui, Tailwind CSS, and deployed on Google Cloud Run for the Google AI "New Year, New You" Portfolio Challenge 2026&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>career</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Using Postgres Full-Text Search on a Next.JS Fullstack App</title>
      <dc:creator>Azeez Roheem</dc:creator>
      <pubDate>Fri, 02 Jan 2026 20:43:09 +0000</pubDate>
      <link>https://forem.com/azeezroheem/using-postgres-full-text-search-on-a-nextjs-fullstack-app-c8g</link>
      <guid>https://forem.com/azeezroheem/using-postgres-full-text-search-on-a-nextjs-fullstack-app-c8g</guid>
      <description>&lt;p&gt;Although workshops can be both instructive and exhausting, I made certain to complete Brian Holt’s workshop on developing a comprehensive Fullstack Next.js application. You may review it on &lt;a href="https://frontendmasters.com/courses/fullstack-app-next-v4/" rel="noopener noreferrer"&gt;Frontendmasters&lt;/a&gt;. I intend to enhance the application by integrating comprehensive full-text search capabilities. I plan to systematically record the procedure for future reference.&lt;/p&gt;

&lt;p&gt;Based on this, I assume that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Next.js full-stack app is complete.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It uses a PostgreSQL database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The database contains some data to search.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below are the steps I followed and researched to implement the functionality. We will build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Full-text search using Postgres&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Powered by Neon&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Queried through Drizzle ORM&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accessed through a Next.js API route&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Used on the search page UI&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since the application is a blog app, we will add:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A full-text search index, created by running the SQL below inside the Neon SQL Editor (the table itself would have been set up while building the app):&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE INDEX articles_search_idx
 ON articles
 USING GIN (
   to_tsvector(
     'english',
     title || ' ' || summary || ' ' || content
   )
 );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a GIN index named &lt;code&gt;articles_search_idx&lt;/code&gt; on the &lt;code&gt;articles&lt;/code&gt; table; you can use any name you prefer. The index makes the full-text search fast and lets results be ranked by relevance.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connection to Drizzle Database&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The database connection should already be in place from building the full-stack application. If it isn't, use the sample below:&lt;/p&gt;

&lt;p&gt;📁 db/index.ts&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; import { drizzle } from "drizzle-orm/neon-http";
 import { neon } from "@neondatabase/serverless";
 import * as schema from "./schema";

 const sql = neon(process.env.DATABASE_URL!);

 export const db = drizzle(sql, { schema });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create the search API route for articles&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;📁 app/api/search/route.ts&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { NextResponse } from "next/server";
 import { db } from "@/db";
 import { sql } from "drizzle-orm";

 export async function GET(req: Request) {
   const { searchParams } = new URL(req.url);
   const query = searchParams.get("q");

   if (!query) {
     return NextResponse.json([]);
   }

   const results = await db.execute(sql`
     SELECT 
       id,
       title,
       slug,
       summary,
       image_url,
       created_at,
       ts_rank(
         to_tsvector('english', title || ' ' || COALESCE(summary, '') || ' ' || content),
         plainto_tsquery('english', ${query})
       ) AS rank
     FROM articles
     WHERE published = true
       AND to_tsvector(
         'english',
         title || ' ' || COALESCE(summary, '') || ' ' || content
       )
       @@ plainto_tsquery('english', ${query})
     ORDER BY rank DESC
     LIMIT 20
   `);

   return NextResponse.json(results.rows);
 }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The easy way I understand this flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I type a search term in the box.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The frontend calls &lt;code&gt;/api/search?q=term&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The backend turns the input into a searchable form, finds matching articles, and scores them by relevance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It returns the best matches, which the UI displays.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Install Shadcn components&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I used Shadcn and Tailwind for the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx shadcn@latest init
 npx shadcn@latest add input
 npx shadcn@latest add button
 npx shadcn@latest add card
 npx shadcn@latest add separator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create the search UI for articles&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;📁 app/search/page.tsx&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"use client";

 import { useState } from "react";
 import Link from "next/link";

 import { Input } from "@/components/ui/input";
 import { Button } from "@/components/ui/button";
 import { Card, CardContent } from "@/components/ui/card";
 import { Separator } from "@/components/ui/separator";

 type Article = {
   id: number;
   title: string;
   slug: string;
   summary: string | null;
   created_at: string;
 };

 export default function ArticleSearchPage() {
   const [query, setQuery] = useState("");
   const [results, setResults] = useState&amp;lt;Article[]&amp;gt;([]);
   const [loading, setLoading] = useState(false);

   async function handleSearch() {
     if (!query.trim()) return;

     setLoading(true);

     const res = await fetch(
       `/api/search?q=${encodeURIComponent(query)}`
     );

     const data = await res.json();
     setResults(data);
     setLoading(false);
   }

   return (
     &amp;lt;div className="mx-auto max-w-3xl px-4 py-10"&amp;gt;
       {/* Page Header */}
       &amp;lt;div className="mb-8 space-y-2 text-center"&amp;gt;
         &amp;lt;h1 className="text-3xl font-bold tracking-tight"&amp;gt;
           Search Articles
         &amp;lt;/h1&amp;gt;
         &amp;lt;p className="text-muted-foreground"&amp;gt;
           Find articles by title, summary, or content
         &amp;lt;/p&amp;gt;
       &amp;lt;/div&amp;gt;

       {/* Search Input */}
       &amp;lt;div className="flex gap-2"&amp;gt;
         &amp;lt;Input
           value={query}
           onChange={(e) =&amp;gt; setQuery(e.target.value)}
           placeholder="Search articles..."
           className="flex-1"
         /&amp;gt;
         &amp;lt;Button onClick={handleSearch} disabled={loading}&amp;gt;
           {loading ? "Searching..." : "Search"}
         &amp;lt;/Button&amp;gt;
       &amp;lt;/div&amp;gt;

       &amp;lt;Separator className="my-8" /&amp;gt;

       {/* Results */}
       &amp;lt;div className="space-y-4"&amp;gt;
         {results.length === 0 &amp;amp;&amp;amp; !loading &amp;amp;&amp;amp; query &amp;amp;&amp;amp; (
           &amp;lt;p className="text-center text-sm text-muted-foreground"&amp;gt;
             No articles found.
           &amp;lt;/p&amp;gt;
         )}

         {results.map((article) =&amp;gt; (
           &amp;lt;Card key={article.id} className="hover:shadow-md transition"&amp;gt;
             &amp;lt;CardContent className="p-6 space-y-2"&amp;gt;
               &amp;lt;Link href={`/articles/${article.slug}`}&amp;gt;
                 &amp;lt;h3 className="text-lg font-semibold hover:underline"&amp;gt;
                   {article.title}
                 &amp;lt;/h3&amp;gt;
               &amp;lt;/Link&amp;gt;

               {article.summary &amp;amp;&amp;amp; (
                 &amp;lt;p className="text-sm text-muted-foreground line-clamp-3"&amp;gt;
                   {article.summary}
                 &amp;lt;/p&amp;gt;
               )}

               &amp;lt;p className="text-xs text-muted-foreground"&amp;gt;
                 {new Date(article.created_at).toDateString()}
               &amp;lt;/p&amp;gt;
             &amp;lt;/CardContent&amp;gt;
           &amp;lt;/Card&amp;gt;
         ))}
       &amp;lt;/div&amp;gt;
     &amp;lt;/div&amp;gt;
   );
 }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;It links together as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When a user types, the frontend updates the query state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clicking Search runs &lt;code&gt;handleSearch()&lt;/code&gt;, which calls &lt;code&gt;/api/search?q=...&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The backend runs the Postgres full-text search query.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Results are returned as JSON, and the UI is updated.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Optional: live search&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can make the search behave like Google: it starts searching as soon as the user begins typing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Input
  value={query}
  onChange={async (e) =&amp;gt; {
    const value = e.target.value;
    setQuery(value);

    if (!value.trim()) {
      setResults([]);
      return;
    }

    setLoading(true);
    const res = await fetch(
      `/api/search?q=${encodeURIComponent(value)}`
    );
    const data = await res.json();
    setResults(data);
    setLoading(false);
  }}
  placeholder="Search articles..."
/&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
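
&lt;p&gt;One caveat with this approach: it fires a request on every keystroke. A small debounce helper (a minimal sketch; the names and delay are illustrative) waits until the user pauses before calling the API:&lt;/p&gt;

```javascript
// Sketch: debounce the live search so /api/search is only hit
// after the user pauses typing, instead of on every keystroke.
function debounce(fn, waitMs = 300) {
  let timer = null;
  return (...args) => {
    // Reset the countdown on every new keystroke.
    if (timer !== null) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage inside the component (illustrative):
// const runSearch = debounce((value) => fetchResults(value), 300);
// onChange={(e) => { setQuery(e.target.value); runSearch(e.target.value); }}
```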



&lt;p&gt;Check a live demo here&lt;/p&gt;

&lt;p&gt;This search can also be expanded later with a dedicated engine such as Algolia or Typesense.&lt;/p&gt;

&lt;p&gt;Thank you for taking time to go through the article.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>postgres</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
