<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Xata</title>
    <description>The latest articles on Forem by Xata (@xata).</description>
    <link>https://forem.com/xata</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F6768%2Fdd0c404c-6c9c-4b34-bd09-5c792baa0d3b.png</url>
      <title>Forem: Xata</title>
      <link>https://forem.com/xata</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/xata"/>
    <language>en</language>
    <item>
      <title>PostgreSQL Branching: Xata vs. Neon vs. Supabase - Part 2</title>
      <dc:creator>AlexF</dc:creator>
      <pubDate>Mon, 30 Jun 2025 18:58:59 +0000</pubDate>
      <link>https://forem.com/xata/postgresql-branching-xata-vs-neon-vs-supabase-part-2-37h3</link>
      <guid>https://forem.com/xata/postgresql-branching-xata-vs-neon-vs-supabase-part-2-37h3</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://xata.io/blog/neon-vs-supabase-vs-xata-postgres-branching-part-1" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, we dissected how each platform branches a Postgres database under the hood. This post zooms in on pricing. We’ll explore how the pricing strategies of &lt;a href="https://xata.io/" rel="noopener noreferrer"&gt;Xata&lt;/a&gt;, &lt;a href="https://neon.com/" rel="noopener noreferrer"&gt;Neon&lt;/a&gt;, and &lt;a href="https://supabase.com/" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt; stack up when you require:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;CI preview databases for a five-engineer team&lt;/li&gt;
&lt;li&gt;HA deployment with two read replicas + always-on staging&lt;/li&gt;
&lt;li&gt;Per-tenant isolation for SaaS offerings&lt;/li&gt;
&lt;li&gt;Data-science sandboxes for model training&lt;/li&gt;
&lt;li&gt;Autonomous agents with high branch churn&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pricing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One of the first questions any team asks when evaluating platforms that are critical to their infrastructure is “how much is this actually going to cost us?” Since each service has a different pricing model, this won't be an exact apples-to-apples comparison. I’ll break down the pricing for Xata, Neon, and Supabase, then provide some scenario-based comparisons to illustrate the differences. &lt;strong&gt;Please note that the pricing and packaging of each solution were collected on June 25, 2025.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Neon vs. Supabase vs. Xata
&lt;/h3&gt;

&lt;p&gt;Focusing on branching, here’s a quick breakdown of the pricing plans.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Plans&lt;/th&gt;
&lt;th&gt;What’s Included&lt;/th&gt;
&lt;th&gt;Typical Instance Rates&lt;/th&gt;
&lt;th&gt;Overage / Extras&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Xata&lt;/td&gt;
&lt;td&gt;Pay-as-you-go&lt;/td&gt;
&lt;td&gt;Unlimited branches, pay for vCPU-hrs &amp;amp; GB-months only&lt;/td&gt;
&lt;td&gt;Example: xata.small (2 vCPU / 2 GB) → $0.024 / hr; xata.xlarge (4 vCPU / 16 GB) → $0.192 / hr&lt;/td&gt;
&lt;td&gt;Storage $0.30 / GB-mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Neon&lt;/td&gt;
&lt;td&gt;Launch $19 / mo Scale $69 / mo Business $700 / mo&lt;/td&gt;
&lt;td&gt;Launch: 300 CPU-hrs &amp;amp; 10 GB Scale: 750 CPU-hrs &amp;amp; 50 GB Business: 1,000 CPU-hrs &amp;amp; 500 GB storage&lt;/td&gt;
&lt;td&gt;Compute charged in CPU-hrs Example: 2 vCPU for 1 hr = 2 CPU-hrs&lt;/td&gt;
&lt;td&gt;CPU overage $0.16 per CPU-hr Storage overage $0.50–$1.75 / GB-mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Supabase&lt;/td&gt;
&lt;td&gt;Pro $25 / mo Team $599 / mo&lt;/td&gt;
&lt;td&gt;$10 compute credit Pro: 1 Micro instance (2 CPU / 1 GB) &amp;amp; 8 GB storage Team: higher limits + support&lt;/td&gt;
&lt;td&gt;Add-on instances: Micro $10 / mo Small $15 / mo Medium $60 / mo Large $110 / mo XL $210 / mo&lt;/td&gt;
&lt;td&gt;Instance add-ons charged by the month; defaults to the Micro rate of $0.01344 / hr Storage overage $0.125 / GB-mo&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Xata&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We aimed for simple, transparent &lt;a href="https://xata.io/pricing" rel="noopener noreferrer"&gt;pricing&lt;/a&gt;. Xata’s pay-as-you-go model charges for compute instance-hours and storage GB-months, period. There are predefined instance sizes, each with a specific vCPU and RAM combination and an hourly rate. For example, a small dev instance (2 vCPUs, 2 GB RAM) is $0.024 per hour (~$18/month if running 24/7). Storage is $0.30 per GB per month, charged only on actual data usage since we don’t pre-allocate disk. All additional features are included at no extra cost. We don’t charge separately for the number of branches or anything like that. You can create unlimited branches; you’ll just pay for the storage difference they incur and any compute hours while those branches are spun up and running.&lt;/p&gt;
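&lt;p&gt;As a quick sanity check, the whole model fits in a few lines (a back-of-envelope sketch using only the rates quoted above, not an official calculator):&lt;/p&gt;

```python
# Back-of-envelope model of Xata's pay-as-you-go pricing,
# using the rates quoted above: hourly compute plus per-GB-month storage.
XATA_RATES = {
    "xata.small": 0.024,   # 2 vCPU / 2 GB, $/hr
    "xata.xlarge": 0.192,  # 4 vCPU / 16 GB, $/hr
}
STORAGE_PER_GB_MONTH = 0.30

def xata_monthly_cost(instance, hours, storage_gb):
    """Compute-hours plus storage; branches add nothing by themselves."""
    return XATA_RATES[instance] * hours + storage_gb * STORAGE_PER_GB_MONTH

# A small dev instance running 24/7 (~730 h) with 10 GB of data:
print(round(xata_monthly_cost("xata.small", 730, 10), 2))  # 20.52 ($17.52 compute + $3 storage)
```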

&lt;h3&gt;
  
  
  Neon
&lt;/h3&gt;

&lt;p&gt;Neon uses a &lt;a href="https://neon.com/pricing" rel="noopener noreferrer"&gt;tiered plan model&lt;/a&gt; with a mix of free quotas and pay-as-you-go for overages. For instance, Neon’s Launch plan is $19/month which includes 300 hours of compute and 10 GB of storage. The Scale plan is $69/month and includes 750 compute hours and 50 GB storage. If you exceed the included hours or storage, you pay overage fees. Neon measures compute in “compute hours” which are basically CPU-hours; e.g., running a 2 CPU instance for 1 hour consumes 2 compute hours. Overage compute hours are billed at $0.16 per CPU-hour beyond the included amount. Storage overages are around $1.50–$1.75 per GB-month depending on tier.&lt;/p&gt;
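&lt;p&gt;The overage math reduces to a one-line formula (a rough sketch using the Scale plan numbers above; real bills also depend on autosuspend behavior):&lt;/p&gt;

```python
# Neon-style billing sketch: a fixed plan fee plus CPU-hour overage.
# CPU-hours scale with vCPU count: a 2 vCPU endpoint for 1 hour = 2 CPU-hours.
def neon_monthly_cost(base_fee, included_cpu_h, used_cpu_h, overage_rate=0.16):
    overage = max(0, used_cpu_h - included_cpu_h)
    return base_fee + overage * overage_rate

# Scale plan ($69/mo, 750 CPU-h included), one 2 vCPU instance running 24/7:
used = 2 * 730  # 1,460 CPU-hours
print(round(neon_monthly_cost(69, 750, used), 2))  # 182.6
```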

&lt;h3&gt;
  
  
  Supabase
&lt;/h3&gt;

&lt;p&gt;Supabase’s pricing has two components: &lt;a href="https://supabase.com/pricing" rel="noopener noreferrer"&gt;the base plan fee and usage add-ons&lt;/a&gt;. Their Pro plan is $25/month base, which includes certain limits (like 8 GB database space, 100 GB file storage, etc.) and, importantly, $10 of compute credits. Those credits cover a certain amount of database instance time. Supabase essentially offers fixed-size instance add-ons: a Micro instance (roughly 2 CPU, 1 GB RAM on ARM) costs $10/month, Small is $15, Medium $60, Large $110, etc. The $10 credit in the base plan means one Micro instance is effectively included. If you upgrade your database to a larger size or add more branches, you incur additional instance charges. Each branch environment is treated like an additional instance for billing purposes, and you’re only charged for the hours it’s actually running. The base $25 plan’s $10 credit does not apply to branch instances; it only covers the main project instance. So if you spin up branches, those incur compute charges separately (about $0.32 per day on a Micro instance).&lt;/p&gt;
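&lt;p&gt;The per-branch figure quoted above falls straight out of the Micro hourly rate (a quick sketch, not Supabase’s official calculator):&lt;/p&gt;

```python
# Supabase branch instances bill hourly at the instance rate; the Pro plan's
# $10 credit covers only the main project's instance, not branches.
MICRO_HOURLY = 0.01344  # $/hr, default Micro rate

def branch_cost(hours, rate=MICRO_HOURLY):
    return hours * rate

print(round(branch_cost(24), 2))   # one branch running a full day: 0.32
print(round(branch_cost(330), 2))  # 330 branch-hours in a month: 4.44
```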

&lt;h2&gt;
  
  
  Cost scenarios
&lt;/h2&gt;

&lt;p&gt;To make the analysis concrete, let’s compare costs in a few theoretical scenarios. Please note that all scenarios are estimates; actual bills will vary. These scenarios assume certain architectural decisions and do not take into account the enterprise-level discounts each offering provides at certain scales.&lt;/p&gt;

&lt;h3&gt;
  
  
&lt;strong&gt;Scenario 1: CI preview databases for a five-engineer team&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In this scenario, you have a production Postgres (~10 GB data) and a dev workflow where 5 developers frequently create isolated databases for feature work. Each dev branch is used ~3 hours per weekday for active development or running CI tests, then torn down. Additionally, you maintain a long-lived staging environment that is refreshed with production data nightly for final QA. Let’s compare monthly costs for enabling this on each platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Cost Breakdown&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Xata (Pay-as-you-go)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Neon (Scale plan)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Supabase (Pro plan)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Baseline env (staging only*)&lt;/td&gt;
&lt;td&gt;~$21. &lt;code&gt;xata.small&lt;/code&gt; (2 vCPU / 2 GB) 24×7 ($18) + 10 GB storage ($3).&lt;/td&gt;
&lt;td&gt;~$180. Persistent 2 vCPU staging instance; exceeds 750 CPU-hr quota, so ~$113 overage on top of the $69 base.&lt;/td&gt;
&lt;td&gt;~$25–30. $25 Pro plan covers one always-on Micro instance for staging; add $5 if you upsize staging to a Small ($15/mo, first $10 covered by plan credits).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dev-branch usage (5 devs × 3 h/day)&lt;/td&gt;
&lt;td&gt;~$4. 330h on micro instances ($0.012 / h).&lt;/td&gt;
&lt;td&gt;~$5. 330 CPU-h of overage at ~$0.016 / CPU-h.&lt;/td&gt;
&lt;td&gt;~$4.44. 330 branch hours ($0.01344 / h).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Estimated monthly cost&lt;/td&gt;
&lt;td&gt;$25–30&lt;/td&gt;
&lt;td&gt;$180–185&lt;/td&gt;
&lt;td&gt;$29–34&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Xata&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Staging on &lt;code&gt;xata.small&lt;/code&gt; runs $18 in compute plus $3 storage. Ephemeral dev branches add about $4 (minute-billed at $0.012 / h) and negligible storage, keeping the bill around $25–30/mo. Copy-on-write means you never pay to duplicate the 10 GB dataset. Granular billing also lets you leave a branch up for an unusually long CI run without being penalized by hourly rounding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neon&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Scale plan’s 750 compute-hour quota is more than consumed by the always-on staging node (1,460 CPU-h), so 710 h spill into overage at $0.16 / h. Add the $69 subscription and you arrive at about $183/mo. The short-lived dev branches add only a few dollars on top. Any spike in staging load, or an extra vCPU added for performance, pushes more hours into overage, so monthly spend can fluctuate noticeably.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supabase&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Pro plan’s $25 base buys you one always-on Micro instance, which is perfect for staging. Dev branch compute is only about $4.44 (330 h × $0.01344/h). With that surcharge, monthly spend climbs to about $29–34. Keeping branch counts low or recycling a single shared preview database are ways to avoid this steady drip.&lt;/p&gt;
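&lt;p&gt;The Xata column in the table can be reproduced directly from the workload assumptions (a sketch assuming 22 workdays per month):&lt;/p&gt;

```python
# Scenario 1, Xata column: staging + ephemeral dev branches + storage.
dev_hours = 5 * 3 * 22            # 5 devs x 3 h/day x 22 workdays = 330 h
dev_compute = dev_hours * 0.012   # xata.micro rate, $/hr
staging = 730 * 0.024             # xata.small staging running 24/7
storage = 10 * 0.30               # 10 GB at $0.30/GB-mo
total = dev_compute + staging + storage
print(round(total, 2))  # 24.48, the low end of the $25-30 estimate
```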

&lt;h3&gt;
  
  
  Scenario 2: HA deployment with two read replicas + always-on staging
&lt;/h3&gt;

&lt;p&gt;To meet strict uptime targets and guarantee zero-downtime fail-over with realistic testing environments, you’ll want to run one primary writer and two read replicas (three AZs total) plus an always-on staging branch. This is a pretty common production setup. In our scenario, every node is an 8 vCPU / 32 GB instance holding the same 100 GB dataset, and all four nodes stay online 24×7.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Cost Breakdown&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Xata (Pay-as-you-go)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Neon (Business plan)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Supabase (Team plan)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Primary + 2 replicas + staging (4 × 8 vCPU)&lt;/td&gt;
&lt;td&gt;~$1,121. &lt;code&gt;xata.2xlarge&lt;/code&gt; $0.384/h × 4 × 730h&lt;/td&gt;
&lt;td&gt;~$4,278. $700 plan + (23,360 CPU-h − 1,000 incl.) × $0.16&lt;/td&gt;
&lt;td&gt;~$2,240. $599 plan + 4 × 2XL compute ($0.562/h)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;~$30. 100 GB × $0.30 / GB-mo&lt;/td&gt;
&lt;td&gt;In-plan (≤500 GB included)&lt;/td&gt;
&lt;td&gt;~$49. (400 GB total − 8 GB quota) × $0.125/GB-mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Estimated monthly cost&lt;/td&gt;
&lt;td&gt;$1,151&lt;/td&gt;
&lt;td&gt;$4,278&lt;/td&gt;
&lt;td&gt;$2,289&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Xata&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Four always-on &lt;code&gt;xata.2xlarge&lt;/code&gt; nodes run about $1,121/mo in compute. With copy-on-write, the primary, two replicas, and staging branch all reference the same 100 GB dataset, adding only $30 in storage. The total comes to $1,151/mo. Fail-over or promotion is simply a metadata flip, and recovery is nearly instant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neon&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each 8-vCPU endpoint burns 5,840 CPU-h per month. With four nodes that is 23,360 CPU-h. After the Business plan’s 1,000 CPU-h allowance, 22,360 CPU-h are billed at $0.16 ($3,578), plus the $700 subscription. Storage is CoW-backed, so the single 100 GB copy stays inside the 500 GB quota of the Business plan, keeping the bill at around $4.3k/mo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supabase&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The primary, two read replicas, and the staging branch all run on 2XL compute add-ons (8 vCPU / 32 GB, $0.562 / h). Four nodes for a full month total about $1,641 in compute. Adding the $599 Team-plan base fee and storage for four physical copies (400 GB billed at $49) brings the cluster to about $2.3k per month. Because Supabase stores a full physical copy per database, storage grows linearly with every replica and the staging branch.&lt;/p&gt;
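&lt;p&gt;All three totals follow from the published rates (a sketch under this scenario’s assumptions; real invoices will differ):&lt;/p&gt;

```python
# Scenario 2: four always-on 8 vCPU / 32 GB nodes, 100 GB dataset, ~730 h/month.
HOURS = 730
xata     = 4 * HOURS * 0.384 + 100 * 0.30               # compute + one shared copy
neon     = 700 + (4 * 8 * HOURS - 1000) * 0.16          # plan + CPU-h overage
supabase = 599 + 4 * HOURS * 0.562 + (400 - 8) * 0.125  # plan + 4 x 2XL + 4 copies

for name, cost in [("Xata", xata), ("Neon", neon), ("Supabase", supabase)]:
    print(name, round(cost))  # Xata 1151, Neon 4278, Supabase 2289
```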

&lt;h3&gt;
  
  
  &lt;strong&gt;Scenario 3: Per-tenant isolation for SaaS offerings&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A common SaaS enterprise pattern is tenant isolation, also described as having a database-per-customer. In this scenario, we assume 200 isolated customer branches, each holding about 10 GB of data and running on smaller instances (2 vCPU / 1 GB).&lt;/p&gt;

&lt;p&gt;In this scenario, branches are live 9 AM–5 PM on workdays (~176 h/month), then hibernate outside of those hours. Storage is always allocated, so the fleet’s overall footprint is about 2 TB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Cost Breakdown&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Xata (Pay-as-you-go)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Neon (Business plan)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Supabase (Team plan)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tenant compute (200 × 176h)&lt;/td&gt;
&lt;td&gt;~$422. 35,200h × $0.012 / h (&lt;code&gt;xata.micro&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;~$6,172. (35,200 CPU-h − 1,000 incl.) × $0.16 + $700 base&lt;/td&gt;
&lt;td&gt;~$1,072. 35,200h × $0.01344 / h + $599 base&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage (~2 TB total)&lt;/td&gt;
&lt;td&gt;$600. 2,000 GB × $0.30 / GB-mo&lt;/td&gt;
&lt;td&gt;~$750. 1.5 TB over 500 GB allowance × $0.50 / GB-mo&lt;/td&gt;
&lt;td&gt;~$249. 1,992 GB over the 8 GB quota × $0.125 / GB-mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Estimated monthly cost&lt;/td&gt;
&lt;td&gt;$1,022&lt;/td&gt;
&lt;td&gt;$6,922&lt;/td&gt;
&lt;td&gt;$1,322&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Xata&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We bill strictly for what’s used. When a tenant branch sleeps, its compute cost drops to zero. Only storage continues at $0.30 / GB-mo. With all 200 customer databases active during the day, the PostgreSQL fleet’s compute adds up to about $422. Copy-on-write means each branch stores just the pages it changes, keeping the 2 TB footprint flat. As a result, for roughly $1k/month you can have 200 isolated tenants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neon&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every active vCPU is metered. Running even a single-CU compute for 176 hours racks up 176 CPU-h. Multiply that by 200 branches and it quickly adds up to 35,200 CPU-h. After the 1,000h allowance the overage alone is $5.5k, plus the $700 plan fee. Storage beyond 500 GB is another $750. Scale-to-zero helps overnight, but eight hours a day across 200 databases still adds up, landing the total near $7k/month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supabase&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Branches are billed exactly like any other database, defaulting to $0.01344/h on the Micro instance. The fleet’s 35,200h therefore costs about $474 per month. Assuming certain features and support are required for this use case, add the $599 Team plan and about $249 for disk. The monthly total is roughly $1.3k.&lt;/p&gt;
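&lt;p&gt;Again, the totals follow directly from the fleet math (differences of a dollar or two against the table are rounding):&lt;/p&gt;

```python
# Scenario 3: 200 tenants, ~176 live hours each per month, ~2 TB stored overall.
tenant_hours = 200 * 176  # 35,200 instance-hours

xata     = tenant_hours * 0.012 + 2000 * 0.30                  # compute + storage
neon     = 700 + (tenant_hours - 1000) * 0.16 + 1500 * 0.50    # plan + overages
supabase = 599 + tenant_hours * 0.01344 + (2000 - 8) * 0.125   # plan + hours + disk

print(round(xata), round(neon), round(supabase))  # 1022 6922 1321
```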

&lt;h3&gt;
  
  
  &lt;strong&gt;Scenario 4: Data-science sandboxes for model training&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In this scenario, five analysts spin up 2 TB clones on 32 vCPU / 128 GB nodes for supervised fine-tuning runs against their models one day each week. Each clone runs about 8 hours every Monday (~32 h/mo), so the platform logs 160 instance-hours a month. We estimate storage growth by assuming the training job rewrites about 10% of the source pages (roughly 200 GB per branch). Multiplied by five analysts, that yields about 1 TB of new data for training.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Cost Breakdown&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Xata (Pay-as-you-go)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Neon (Business plan)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Supabase (Team plan)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Model-training (32 vCPU × 160h)&lt;/td&gt;
&lt;td&gt;~$246. &lt;code&gt;xata.8xlarge&lt;/code&gt; at $1.536 / h × 160h&lt;/td&gt;
&lt;td&gt;~$1,359. $700 plan + (5,120 CPU-h − 1,000 incl.) × $0.16&lt;/td&gt;
&lt;td&gt;~$410. 8XL compute at $2.562 / h × 160 h&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage overhead (2 TB base + 1 TB deltas)&lt;/td&gt;
&lt;td&gt;~$900. 3 TB total × $0.30/GB-mo&lt;/td&gt;
&lt;td&gt;~$1,250. 2.5 TB over 500 GB allowance × $0.50/GB-mo&lt;/td&gt;
&lt;td&gt;~$1,535. 12 TB total × $0.125 / GB-mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Estimated monthly cost&lt;/td&gt;
&lt;td&gt;$1,146&lt;/td&gt;
&lt;td&gt;$2,609&lt;/td&gt;
&lt;td&gt;$2,544&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Xata&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You pay strictly for the 160 hours on &lt;code&gt;xata.8xlarge&lt;/code&gt;, about $246. Because the primary 2 TB production database also lives inside Xata, storage now totals 3 TB (2 TB base + 1 TB dirty pages) at $0.30/GB-mo, or roughly $900. Branches hibernate for the remaining 672 h each month, so there’s no idle-time compute charge. The total cost is about $1,146/mo, still proportional to the minutes the big node is powered on and the pages that are actually modified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neon&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Business tier includes 1,000 CPU-hours and 500 GB of storage for $700/mo. One 32-vCPU node burns 32 CPU-hours per clock hour, so across 160 h that’s 5,120 CPU-h; the 4,120 h beyond the included 1,000 are billed at the $0.16 overage rate ($659). On top of this, 2.5 TB of extra storage at $0.50/GB adds $1,250, resulting in roughly $2.6k/mo for this use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supabase&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each sandbox uses the 8XL compute add-on ($2.562/h), so 160 h costs $410. Supabase lacks copy-on-write for data pages, so each sandbox must hold a full 2 TB copy, bringing total stored data to 12 TB versus the 3 TB on Xata and Neon. Add the $599 Team plan and about $1,535 for 12 TB of storage, and the total comes to $2.54k/mo. Note that 12 TB exceeds the Team cap, so an enterprise tier may be required, and multi-terabyte disk resizes add lead time before each training run.&lt;/p&gt;
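&lt;p&gt;The same arithmetic for this scenario (a sketch; the Supabase figure lands slightly below the table’s $2,544 depending on how 12 TB is converted to GB):&lt;/p&gt;

```python
# Scenario 4: five 2 TB clones on 32 vCPU nodes, ~32 h each per month.
hours = 5 * 32  # 160 instance-hours

xata     = hours * 1.536 + 3000 * 0.30                     # 2 TB base + 1 TB deltas
neon     = 700 + (hours * 32 - 1000) * 0.16 + 2500 * 0.50  # CPU-h overage + storage
supabase = 599 + hours * 2.562 + 12000 * 0.125             # full 2 TB copy per clone

print(round(xata), round(neon), round(supabase))  # 1146 2609 2509
```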

&lt;h3&gt;
  
  
  &lt;strong&gt;Scenario 5: Autonomous agents with high branch churn&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In the last few weeks, we’ve seen the importance of databases in the AI agent race explode. With &lt;a href="https://www.databricks.com/company/newsroom/press-releases/databricks-agrees-acquire-neon-help-developers-deliver-ai-systems" rel="noopener noreferrer"&gt;Databricks’ recent acquisition of Neon&lt;/a&gt; and &lt;a href="https://www.snowflake.com/en/blog/snowflake-postgres-enterprise-ai-database/" rel="noopener noreferrer"&gt;Snowflake’s recent acquisition of Crunchy Data&lt;/a&gt;, it’s fair to say PostgreSQL is a well-positioned database for this use case. In this scenario, an AI platform creates 1,000 ephemeral branches every day, each on a small instance (2 vCPU / 1 GB).&lt;/p&gt;

&lt;p&gt;Because of the task-oriented nature of the agents and the speed of iteration, every branch lives for only about 30 minutes. In total, the PostgreSQL fleet logs 15,000 instance-hours per month. In this hypothetical, the working set is tiny (20 GB total), and branches are dropped as soon as the agent finishes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Cost Breakdown&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Xata (Pay-as-you-go)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Neon (Scale plan)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Supabase (Pro plan)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Agent compute (15,000h)&lt;/td&gt;
&lt;td&gt;~$180. 15,000h × $0.012 / h (&lt;code&gt;xata.micro&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;~$2,349. $69 plan + (15,000 CPU-h - 750 incl.) × $0.16&lt;/td&gt;
&lt;td&gt;~$418. $25 plan - $10 credit + 30,000 billable h × $0.01344&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage (20 GB)&lt;/td&gt;
&lt;td&gt;~$6. 20 GB × $0.30 / GB-mo&lt;/td&gt;
&lt;td&gt;Covered by the plan&lt;/td&gt;
&lt;td&gt;~$1.50. (20 GB − 8 GB) × $0.125&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Estimated monthly cost&lt;/td&gt;
&lt;td&gt;$186&lt;/td&gt;
&lt;td&gt;$2,349&lt;/td&gt;
&lt;td&gt;$420&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Xata&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Billing is minute-level at Xata: each 30-minute &lt;code&gt;xata.micro&lt;/code&gt; branch costs about $0.006. Aggregated over the month, that’s about $180 in compute. Because branches are copy-on-write, the 20 GB hot set is stored only once, adding just $6. No control-plane limits throttle the 1,000-per-day create/delete cycle. In total, it’s about $186/mo for an effectively unlimited number of ephemeral databases for your agents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neon&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Scale plan includes 750 CPU-h and 50 GB of storage for $69. The workload uses 15,000 CPU-h; after the included 750 h, the remaining 14,250 h of overage at $0.16 add $2,280, bringing the bill to about $2.35k. Storage fits inside the quota, and scale-to-zero doesn’t help because branches expire after 30 minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supabase&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Branches are metered by the hour and rounded up, so 30,000 billable hours × $0.01344 comes to about $403 for compute. Subtract the Pro plan’s $10 compute credit, add the $25 subscription fee and $1.50 for 12 GB of extra storage, and the monthly total comes to just under $420.&lt;/p&gt;
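&lt;p&gt;The hour-rounding effect is easiest to see in code (a sketch over an assumed 30-day month):&lt;/p&gt;

```python
# Scenario 5: 1,000 branches/day, each alive ~30 minutes, over a 30-day month.
actual_hours = 1000 * 0.5 * 30   # 15,000 real instance-hours

xata = actual_hours * 0.012 + 20 * 0.30    # minute-level billing + 20 GB storage
neon = 69 + (actual_hours - 750) * 0.16    # Scale plan + CPU-h overage

billable_hours = 1000 * 1 * 30             # Supabase rounds 30 min up to 1 h
supabase = 25 - 10 + billable_hours * 0.01344 + (20 - 8) * 0.125

print(round(xata), round(neon), round(supabase))  # 186 2349 420
```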

&lt;h2&gt;
  
  
  Pricing analysis conclusion
&lt;/h2&gt;

&lt;p&gt;Across all five scenarios the headline prices turn out to be only part of the story. And realistically, at certain scales discounts would come into play or different architectural decisions would be made. At the end of the day, Xata’s pay-as-you-go model remains linear: you add up vCPU-minutes and the copy-on-write storage actually in use. A 2 vCPU dev instance always costs $0.024/h and an 8 vCPU production instance always costs $0.384/h, no matter how many branches you keep around. Neon, by contrast, bundles a fixed pool of compute hours with each plan. Once a project’s vCPU-hours cross that quota, every extra hour is billed at $0.16 per CPU-hour. Supabase sets a low entry price for compute, but every branch is rounded up to the next full hour, and large fleets with production-like data sets require full physical storage.&lt;/p&gt;

&lt;p&gt;Those hidden costs quickly become the surprise line items in your monthly invoice. In Neon, keeping a single 8-vCPU database online 24×7 burns 5,840 CPU-hours a month, over five times the Scale plan’s allowance; overage quickly eclipses the base fee in long-running, high-CPU workloads. With Supabase, ephemeral workloads are rounded up to the full hour, and when there are many of them, that rounding alone can double your cost. Because Xata has neither quota overages nor per-branch levies, its costs stay proportional to the resources you use, making final invoices more predictable when workloads scale up or branch counts explode.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping it up
&lt;/h2&gt;

&lt;p&gt;Branching a PostgreSQL database has officially moved from an interesting idea to being common in daily developer workflows. Xata, Neon, and Supabase each prove the pattern works and is reshaping the way developers and agents build, test, and ship software. With branching, the old bottlenecks of one shared developer database or the risk of production data leaking into your staging environment can disappear. Teams get parallel feature work, reproducible CI pipelines, and more efficient release practices.&lt;/p&gt;

&lt;p&gt;Where the platforms diverge is in how their architectures convert that experience into dollars and developer experience. Xata keeps costs linear to the resources you actually hold open, Neon prioritizes burstable, serverless efficiency for short-lived workloads, and Supabase offers an all-in-one stack at a predictable entry price. Each model suits a different style of workflow. When considering adopting branching, make sure your choice matches your reality and expectations.&lt;/p&gt;

&lt;p&gt;At Xata we spent a lot of time weighing those trade-offs and identifying who we’re building this platform for. Ultimately we landed on a vanilla-Postgres, copy-on-write storage layer that scales and improves performance, with an invoice that’s easily predictable. If that sounds interesting to you, we’re currently in private beta onboarding new teams every day. You can request access &lt;a href="https://xata.io/get-access" rel="noopener noreferrer"&gt;here&lt;/a&gt; or drop by our &lt;a href="http://xata.io/discord" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; to chat.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>sql</category>
    </item>
    <item>
      <title>PostgreSQL Branching: Xata vs. Neon vs. Supabase - Part 1</title>
      <dc:creator>AlexF</dc:creator>
      <pubDate>Wed, 25 Jun 2025 16:24:42 +0000</pubDate>
      <link>https://forem.com/xata/postgresql-branching-xata-vs-neon-vs-supabase-part-1-3jg4</link>
      <guid>https://forem.com/xata/postgresql-branching-xata-vs-neon-vs-supabase-part-1-3jg4</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As head of product at Xata, I’ve had a front-row seat to the recent (r)evolution of Postgres and the rise of database branching over the last few years. Functionally, this capability mirrors the branching behavior in git, allowing you to quickly clone or fork your data and schema alongside your application feature branch. With modern development practices and the speed at which agents are being adopted to build applications, the ability to branch a database has moved from a nice-to-have to a must-have.&lt;/p&gt;

&lt;p&gt;Multiple platforms have emerged to tackle this, each with a unique approach. &lt;a href="https://neon.com/" rel="noopener noreferrer"&gt;Neon&lt;/a&gt; popularized the idea of copy-on-write branches in Postgres, letting teams spin up full data copies in seconds. &lt;a href="https://supabase.com/" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt; integrated database branching with git workflows, providing full-stack preview environments for every feature branch. With our new PostgreSQL platform at &lt;a href="https://xata.io/" rel="noopener noreferrer"&gt;Xata&lt;/a&gt;, we’ve reimagined branching from the ground up to address the limitations we saw in existing solutions.&lt;/p&gt;

&lt;p&gt;In this blog series I’ll compare the features, architecture, and cost of the Xata, Neon, and Supabase implementations. Because I go into a bit of detail, I’ll start each section with a comparison table and a TL;DR for those of you who want the cliff notes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Features&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, we’ll compare the database branching and related features that each platform supports.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;TL;DR&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;All three platforms recognize the value of database branching, but they differ in both features and philosophy. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Xata&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Neon&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Supabase&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Branching types&lt;/td&gt;
&lt;td&gt;Full schema + data (copy-on-write) branches&lt;/td&gt;
&lt;td&gt;Full schema + data (copy-on-write) branches&lt;/td&gt;
&lt;td&gt;Schema only branches&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PII and sensitive data&lt;/td&gt;
&lt;td&gt;Built-in anonymization masking of PII and sensitive data&lt;/td&gt;
&lt;td&gt;Masking possible through extensions or scripts&lt;/td&gt;
&lt;td&gt;Masking is possible through seed scripts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Isolation and creation time&lt;/td&gt;
&lt;td&gt;Fully isolated and instant branches regardless of data size&lt;/td&gt;
&lt;td&gt;Fully isolated and instant branches regardless of data size&lt;/td&gt;
&lt;td&gt;Fully isolated and branch creation time dependent on seed scripts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Merging schema changes&lt;/td&gt;
&lt;td&gt;Built-in, zero-downtime merge of branch schema back to production&lt;/td&gt;
&lt;td&gt;No built-in merge back to production, requires external tooling&lt;/td&gt;
&lt;td&gt;No built-in merge back to production, requires external tooling and leans on code-as-truth&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deployment flexibility&lt;/td&gt;
&lt;td&gt;Managed cloud offering, BYOC and on-premise supported, built on open-source solutions&lt;/td&gt;
&lt;td&gt;Managed cloud service only, core offering open source managed by you&lt;/td&gt;
&lt;td&gt;Managed cloud service only, core offering open source managed by you&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redundancy and HA&lt;/td&gt;
&lt;td&gt;At both Postgres level and at storage level&lt;/td&gt;
&lt;td&gt;At storage level only&lt;/td&gt;
&lt;td&gt;At Postgres level only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Compatibility&lt;/td&gt;
&lt;td&gt;Unmodified Postgres&lt;/td&gt;
&lt;td&gt;Modified Postgres&lt;/td&gt;
&lt;td&gt;Unmodified Postgres&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Here’s a more in-depth overview of how Xata, Neon, and Supabase compare in these feature categories.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Branching types&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Xata and Neon support instant copy-on-write branching that duplicates both schema and data. This means a new branch starts as a complete copy of the parent database’s data without physically copying it. Supabase’s branching, on the other hand, is currently schema-only: new branches include the schema and run migrations, but no data is copied from the parent. Supabase currently requires you to provide a seed script for the data, which can generate random sample data or pull a subset from an existing database.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;PII and sensitive data&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Xata’s pipeline removes PII before branches exist. We use &lt;code&gt;pgstream&lt;/code&gt; to replicate production into an internal staging replica, applying masking / anonymization rules during the initial snapshot and every subsequent WAL change. Because the replica already contains only scrubbed data, any branch you spin off that replica inherits the same protection automatically: engineers never see real emails or SSNs, and raw PII never leaves your prod environment.&lt;/p&gt;

&lt;p&gt;Neon and Supabase have no built-in masking. Neon users typically install the open-source &lt;code&gt;pg_anonymizer&lt;/code&gt; extension on a staging database and script a &lt;code&gt;pg_dump&lt;/code&gt; → &lt;code&gt;pg_restore&lt;/code&gt; workflow. It works whether production is on Neon or elsewhere, but it still requires exporting sensitive data out of production first. Supabase offers schema-only branches, so teams either rely on synthetic seed data or build a similar dump/anonymize/import process themselves.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Isolation and creation time&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;On Xata and Neon, branch creation is effectively instant and the child branch is fully isolated from the parent. Both achieve this with copy-on-write at the storage layer: the new branch initially references the same data and only diverges when writes occur. Branch creation time therefore does not depend on database size at all; a 1 TB database can be branched instantly on either platform. A branch in Supabase is also fully isolated, but its creation time depends on how fast migrations run and seed data loads. You pay the cost of applying DDL and inserting seed rows, which can take minutes to hours depending on your setup. Once created, changes in a branch don’t affect the parent on any platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Merging schema changes&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Branching is only half the battle; eventually you want to merge changes to your schema back to the parent branch. Xata provides an integrated schema migration workflow that works with branches to facilitate zero-downtime schema changes (leveraging &lt;code&gt;pgroll&lt;/code&gt;). Neither Neon nor Supabase offers an out-of-the-box solution for merging branch changes. With Neon, you typically apply schema migrations to the primary database manually (or with your migration tool / ORM) after testing in a branch. Supabase’s model relies heavily on your git workflow: when you merge your code branch to main, any new database migrations in that branch are automatically run on the production database.&lt;/p&gt;
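For a flavor of the pgroll side of this workflow, a schema change is expressed as a declarative migration file that pgroll applies with its expand/contract pattern. The table and column below are invented for illustration; see the pgroll docs for the full operation set.

```json
{
  "name": "01_add_description_column",
  "operations": [
    {
      "add_column": {
        "table": "products",
        "column": {
          "name": "description",
          "type": "text",
          "nullable": true
        }
      }
    }
  ]
}
```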

&lt;h3&gt;
  
  
  &lt;strong&gt;Deployment flexibility&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Xata can be run as a managed cloud service, in your own cloud (BYOC), or on-premises; our new platform is cloud-agnostic. This optionality means you can keep data in-house or in-region for compliance purposes, and it allows you to use your existing cloud credits or pre-committed spend. The platform is built entirely from open-source solutions. Neon and Supabase are both fully managed cloud services with their core technology open-sourced; either can be run in a self-managed fashion if you are willing to run, manage, and support the infrastructure yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Redundancy and HA&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Xata offers double the protection, at both the Postgres and storage layers, with a multi-AZ Postgres replica set handled by CloudNativePG and an erasure-coded Simplyblock storage cluster. If either a compute pod or an entire zone fails, a new pod mounts the same copy-on-write volume and traffic resumes in seconds with no data loss. Neon concentrates its HA logic in the storage engine: WAL is synchronously streamed to Safekeepers across zones and rebuilt by the Pageserver, so the data is always durable. If a stateless compute endpoint dies you just spin up another one, but in-flight sessions are dropped. Supabase has simple Postgres semantics, with each branch having its own VM/container. You can add a read replica or enable PITR, yet fail-over and cold-start behavior remain per-branch responsibilities, making HA simple but largely self-managed.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Compatibility&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Xata’s platform runs vanilla PostgreSQL with no proprietary forking, so it supports all Postgres extensions and features out of the box. In contrast, Neon’s approach involves a custom storage engine and components, meaning they maintain a modified Postgres backend to integrate with that system. As a result, extensions that assume low-level disk access might require modifications. Similar to Xata, Supabase operates standard Postgres, so most extensions that don’t conflict with their platform can be used. From a developer’s standpoint, all three should &lt;em&gt;feel like&lt;/em&gt; Postgres, but only with 100% compatibility will you never hit the edge case of “oh, that extension isn’t supported”.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Under the hood, Xata, Neon, and Supabase employ very different architectures to achieve database branching. The design decisions at this level have big implications for performance, reliability, and cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;p&gt;Here’s a quick overview of how each platform approaches branching in PostgreSQL.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Platform&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Branch creation&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://xata.io/blog/xata-postgres-with-data-branching-and-pii-anonymization" rel="noopener noreferrer"&gt;Xata&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Stateless Postgres pods in Kubernetes via &lt;a href="https://github.com/cloudnative-pg/cloudnative-pg" rel="noopener noreferrer"&gt;CNPG&lt;/a&gt;; NVMe-oF block-storage cluster (&lt;a href="https://www.simplyblock.io/" rel="noopener noreferrer"&gt;Simplyblock&lt;/a&gt;)&lt;/td&gt;
&lt;td&gt;Controller writes a new block index, CoW at block level&lt;/td&gt;
&lt;td&gt;100% vanilla PG, BYOC/on-prem, NVMe-class latency, built-in PII masking&lt;/td&gt;
&lt;td&gt;Requires dedicated storage cluster; scale-to-zero in active development&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://neon.com/docs/introduction/architecture-overview" rel="noopener noreferrer"&gt;Neon&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Stateless compute streams WAL to Safekeepers to Pageserver; storage in S3&lt;/td&gt;
&lt;td&gt;New timeline at an LSN, CoW at page level via WAL&lt;/td&gt;
&lt;td&gt;Instant branches/PITR, scale-to-zero, active OSS community&lt;/td&gt;
&lt;td&gt;Extra network hop means higher tail latency, overage costs, no managed BYOC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://supabase.com/docs/guides/getting-started/architecture" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;One full Postgres + Auth + Edge stack per branch; local disk per VM/container&lt;/td&gt;
&lt;td&gt;Forks schema, runs migrations, optional manual seed script&lt;/td&gt;
&lt;td&gt;Simple “just Postgres”, whole backend cloned (auth, storage, funcs) for branches&lt;/td&gt;
&lt;td&gt;Schema-only by default, heavy per-branch resources, cold-start after auto-pause, slow &amp;amp; costly to copy large datasets&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That’s the 10,000-foot view; let’s go a bit deeper.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Xata&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Xata’s new platform uses a decoupled storage and compute architecture similar in spirit to Aurora, but with a critical difference: we do it strictly at the storage layer, without modifying Postgres itself. On the compute side, we run unmodified PostgreSQL instances in containers orchestrated by Kubernetes, using the &lt;a href="https://github.com/cloudnative-pg/cloudnative-pg" rel="noopener noreferrer"&gt;CloudNativePG&lt;/a&gt; operator for high availability and failover. On the storage side, we’ve partnered with &lt;a href="https://www.simplyblock.io/" rel="noopener noreferrer"&gt;Simplyblock&lt;/a&gt; to provide a distributed block storage cluster accessible over NVMe-oF (NVMe over Fabrics).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7udgiwcrnb0k0i5ie93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7udgiwcrnb0k0i5ie93.png" alt="Xata architecture" width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In practical terms, when a Xata Postgres instance writes to disk, it’s writing to a virtual NVMe volume backed by our storage cluster. That storage cluster handles replication, durability, and importantly copy-on-write branching at the block level. We mount and unmount these virtual volumes dynamically for each branch and database.&lt;/p&gt;

&lt;p&gt;The copy-on-write mechanism works roughly like this: our storage cluster breaks the database into chunks (data blocks). When you create a new branch, it creates a new metadata index that initially points to all the same data blocks as the parent. No actual data is copied, so branch creation is instantaneous. If neither the parent nor child branch makes any writes, they remain two “views” of the same underlying data. When a write does occur, the affected chunk is copied and written to a new location, and the child’s index now points to that new chunk. The parent keeps pointing at the original chunk. This is textbook copy-on-write (CoW): only diverging data is duplicated, and only at first write. Most chunks remain shared as long as they aren’t modified, saving a ton of space when branches are short-lived or only lightly edited (which has become quite a common use case with AI agents).&lt;/p&gt;
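The chunk-index mechanics can be sketched in a few lines of TypeScript. This is an illustrative model only: the class names and chunk layout are invented for the example and are not Xata's storage internals.

```typescript
// Illustrative copy-on-write model: a shared chunk store plus a per-branch index.
// Names and structure are invented for the example, not Xata internals.
type ChunkId = number;

class ChunkStore {
  private chunks: string[] = [];
  write(data: string): ChunkId {
    this.chunks.push(data);
    return this.chunks.length - 1;
  }
  read(id: ChunkId): string {
    return this.chunks[id];
  }
}

class Branch {
  // index[i] = which physical chunk backs logical block i
  constructor(private store: ChunkStore, private index: ChunkId[]) {}

  // Branch creation copies only the index (metadata), never the data.
  fork(): Branch {
    return new Branch(this.store, [...this.index]);
  }

  read(block: number): string {
    return this.store.read(this.index[block]);
  }

  // First write to a block allocates a new chunk; the parent keeps the old one.
  write(block: number, data: string): void {
    this.index[block] = this.store.write(data);
  }
}

const store = new ChunkStore();
const parent = new Branch(store, ["users-v1", "orders-v1"].map((d) => store.write(d)));
const child = parent.fork();  // instant: no chunks copied
child.write(1, "orders-v2");  // CoW: only the modified chunk diverges

console.log(parent.read(1)); // "orders-v1" (parent unaffected)
console.log(child.read(1));  // "orders-v2"
console.log(child.read(0));  // "users-v1" (still shared with the parent)
```

Branch creation here is proportional to the index size, not the data size, which is why it stays instant even for a very large database.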

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeqwm9eil2a4f938dh7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeqwm9eil2a4f938dh7s.png" alt="CoW Branching" width="800" height="1222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The benefit of Xata’s approach is that it behaves exactly like a normal Postgres instance, because it’s just Postgres, with CoW branching and elastic storage behind the scenes. We didn’t have to fork or alter Postgres to do this, and instead innovated at the storage layer and in orchestration. Our Postgres instances see what looks like a very fast disk. The heavy lifting of making that disk CoW-capable, highly redundant, and network-accessible is handled by the Simplyblock storage layer. We chose this path so that we maintain 100% Postgres compatibility (extensions, planner behavior, etc.) and can take advantage of the huge ecosystem around Postgres.&lt;/p&gt;

&lt;p&gt;From a performance standpoint, Xata’s storage architecture is built for low latency and high throughput. The storage nodes bypass the kernel using user-space drivers via SPDK (Storage Performance Development Kit) to squeeze every drop of performance from NVMe drives. Data is synchronously replicated across nodes with an erasure-coding scheme for fault tolerance (think RAID, but distributed at the cluster level). In plain English: it’s fast and durable. We’ve benchmarked it and, even with a network hop, observed throughput and latency very close to well-tuned local-NVMe setups and markedly faster than typical cloud block storage (e.g., EBS/RDS) or modern serverless solutions thanks to these optimizations. We plan to publish these benchmarks soon.&lt;/p&gt;

&lt;p&gt;Because the Xata stack has been built with a cloud-agnostic architecture that is out-of-the-box Postgres on Kubernetes with a self-contained storage layer, we can offer Bring-Your-Own-Cloud deployments with minimal lift. Our control plane can launch the Postgres compute nodes in your cloud account or bare metal servers, attaching to the Xata storage cluster over the network. The data stays in your environment and never traverses the public internet. This is a big advantage in enterprise scenarios where this level of scrutiny matters.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Neon&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Neon can be credited with building the first open-source storage-compute separation design for Postgres, implemented via a bespoke storage engine. In Neon’s architecture, when a Postgres compute node runs, it does not write to local disk. Instead, it streams all its WAL (write-ahead log) to a service called Safekeepers. Safekeepers are like a durable WAL buffer: they accept the WAL entries and replicate them across availability zones to ensure they’re not lost. Meanwhile, another component called the Pageserver is responsible for actual data storage. Pageservers consume the WAL stream from Safekeepers and apply it to a database page store. They keep recently used pages in memory and also store data on disk or in cloud object storage as the source of truth.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfwnel05k8u8mt30s8ej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfwnel05k8u8mt30s8ej.png" alt="Neon architecture" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How does Neon achieve branching? They use a non-overwriting storage format in the pageserver. Essentially copy-on-write at the page/version level. When you create a branch in Neon, under the hood it’s like creating a new timeline starting at a specific WAL LSN (log sequence number). The pageserver doesn’t duplicate all pages for the new branch; it just starts a new sequence of WAL for it. Initially, the branch shares all history with its parent until a divergence occurs. Only when new WAL comes in (writes on the branch) do new page versions get written. This is effectively the same outcome as Xata’s copy-on-write, just implemented via a logical WAL timeline mechanism rather than at the block device layer.&lt;/p&gt;
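A branch-as-timeline can be modeled as nothing more than a pointer to (parent, LSN). The sketch below is illustrative: the names and structure are invented, and a real Pageserver stores page images and WAL records rather than strings.

```typescript
// Toy model of branch-as-timeline: a page read resolves to the newest version
// at-or-before the reader's LSN, walking up the parent chain if needed.
// All names here are invented for illustration; this is not Neon's actual API.
type Lsn = number;

class Timeline {
  // pageVersions[pageNo] = list of { lsn, data }, appended in LSN order
  private pageVersions = new Map<number, { lsn: Lsn; data: string }[]>();

  constructor(
    private parent: Timeline | null = null,
    private branchLsn: Lsn = 0, // point on the parent where we branched
  ) {}

  writePage(pageNo: number, lsn: Lsn, data: string): void {
    const versions = this.pageVersions.get(pageNo) ?? [];
    versions.push({ lsn, data });
    this.pageVersions.set(pageNo, versions);
  }

  // Branching records only (parent, LSN); no page data is copied.
  branchAt(lsn: Lsn): Timeline {
    return new Timeline(this, lsn);
  }

  readPage(pageNo: number, atLsn: Lsn): string | undefined {
    const local = (this.pageVersions.get(pageNo) ?? [])
      .filter((v) => v.lsn <= atLsn)
      .pop();
    if (local) return local.data;
    // Fall back to the parent's history, capped at the branch point.
    return this.parent?.readPage(pageNo, Math.min(atLsn, this.branchLsn));
  }
}

const main = new Timeline();
main.writePage(7, 100, "page7@100");
const branch = main.branchAt(150);     // instant: shares history up to LSN 150
main.writePage(7, 200, "page7@200");   // later write on main
branch.writePage(7, 210, "page7@210"); // divergent write on the branch

console.log(main.readPage(7, 300));   // "page7@200"
console.log(branch.readPage(7, 300)); // "page7@210"
console.log(branch.readPage(7, 160)); // "page7@100" (inherited from the parent)
```

The outcome is the same sharing behavior as block-level CoW, just expressed as versioned pages keyed by LSN instead of chunk pointers.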

&lt;p&gt;One effect of Neon’s approach is that the Postgres compute nodes are stateless. If a compute node crashes or is stopped (say due to inactivity), you can later spin up a new compute node, point it at the appropriate timeline (branch), and it will retrieve whatever pages it needs from the pageserver. This is how Neon achieves scaling to zero and fast cold starts. The database state persists in the pageserver/S3, and a compute node can be brought up on demand to service queries, then shut down when idle.&lt;/p&gt;

&lt;p&gt;The trade-off is that Neon’s architecture introduces additional indirection and complexity in the performance-critical data path (WAL network hops, page retrieval on cache miss). In other words, there is some inevitable overhead compared to a single-node Postgres with network-attached storage, particularly for very write-heavy workloads or very random OLTP reads that can’t all fit in memory. For many developer and test workloads, this overhead might be acceptable. At larger scale or sustained high throughput, data retrieval might observe higher tail latencies due to that extra network/storage indirection. This additional complexity is one of the disadvantages of running a modified version of Postgres.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Supabase&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Supabase’s approach to branching is the simplest conceptually: each branch is just a new Postgres instance (plus the accompanying services) created from scratch. When you create a branch in Supabase, the platform essentially provisions a new Postgres container (or VM) for that branch, sets up a blank database, and runs your migration and seed scripts to create the schema and populate data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7npdyh6l143c73d4rtf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7npdyh6l143c73d4rtf3.png" alt="Supabase architecture" width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is no special storage magic here like copy-on-write. If you want a branch to have production-like data, you have to script it. Today, that means writing a &lt;strong&gt;&lt;code&gt;seed.sql&lt;/code&gt;&lt;/strong&gt; that inserts sample rows or copying a subset of data from prod through your own processes. At the moment, Supabase explicitly does not clone your data into the branch.&lt;/p&gt;
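A seed script is plain SQL that runs after migrations on each new branch. Something like this hypothetical seed.sql (the tables and values are invented for illustration):

```sql
-- Hypothetical seed.sql: gives every preview branch a small, PII-free dataset.
insert into public.users (email, full_name)
values
  ('dev1@example.com', 'Test User One'),
  ('dev2@example.com', 'Test User Two');

insert into public.orders (user_id, total_cents)
select id, 1999 from public.users where email = 'dev1@example.com';
```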

&lt;p&gt;Because each branch is a full Postgres instance, branches are fully isolated at the OS level (not just at the data level). Aside from shared control plane components, one branch’s activity can’t impact another’s performance since they don’t share underlying storage or CPU. This is good for isolation, but it means branches are heavier weight. Supabase mitigates cost by auto-pausing preview branches when not in use: unless a branch is marked as persistent, you’ll incur a cold start after it’s been idle for more than 5 minutes (you can flip a branch between the ephemeral and persistent options).&lt;/p&gt;

&lt;p&gt;Supabase branching leverages the standard Postgres stack, so performance on a branch is the same as performance on any Postgres instance of that size. There’s no extra overhead of remote storage or CoW bookkeeping. The trade-off is that branch creation is much slower if you want to copy large amounts of data. In practice, Supabase expects you to seed only a small amount of data, enough for simple tests. That is fine for testing logic and migrations, but it’s not adequate if you want to, say, test a complex query on a full-size dataset or train your latest model on production-like data. In those cases, Xata’s or Neon’s approach of branching the actual data shines.&lt;/p&gt;

&lt;p&gt;Because each Supabase branch is a standalone Postgres, if your production DB is, say, 50 GB and you want a full copy for staging, you need another 50 GB of storage for the branch. Xata and Neon would not double your storage unless you modified all of that data in the branch. So for storage-intensive scenarios, Supabase’s model can be costlier.&lt;/p&gt;
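The storage arithmetic is easy to make concrete. Assuming a branch rewrites only a fraction of its blocks, the two models diverge quickly (an illustrative formula, not either vendor's billing logic):

```typescript
// Storage needed for a primary plus N branches, in GB.
// Full-copy model: every branch duplicates the whole dataset.
// CoW model: a branch only stores the blocks it has rewritten.
function fullCopyStorage(baseGb: number, branches: number): number {
  return baseGb * (1 + branches);
}

function cowStorage(baseGb: number, branches: number, dirtyPercent: number): number {
  return (baseGb * (100 + branches * dirtyPercent)) / 100;
}

// 50 GB production DB, one staging branch that rewrites 10% of its blocks:
console.log(fullCopyStorage(50, 1)); // 100
console.log(cowStorage(50, 1, 10));  // 55
```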

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Database branching isn’t a checklist feature; it’s a fresh take on how you should operate and interact with a database.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Xata treats branches as light, block-level snapshots, so large teams can clone production-sized datasets safely and predictably.&lt;/li&gt;
&lt;li&gt;Neon re-architects Postgres for serverless elasticity and instant branches.&lt;/li&gt;
&lt;li&gt;Supabase keeps things familiar, favoring full stack branches with additional microservices around Postgres.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Features and architecture tell only half the story. What does this look like in the real world and how do these choices impact pricing and cost for customers? In part 2 we’ll translate architectures into invoices, walking through each pricing model and three common branching scenarios.&lt;/p&gt;

&lt;p&gt;If you’re a Postgres user and are branch-curious, I invite you to give Xata a try. We’re currently in private beta and onboarding new teams on a daily basis. You can sign up for early access &lt;a href="https://xata.io/get-access" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Feel free to &lt;a href="http://xata.io/discord" rel="noopener noreferrer"&gt;pop into our Discord&lt;/a&gt; to ask any questions or simply say hi.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>sql</category>
    </item>
    <item>
      <title>Writing an LLM Eval with Vercel's AI SDK and Vitest</title>
      <dc:creator>Richard Gill</dc:creator>
      <pubDate>Thu, 03 Apr 2025 10:53:06 +0000</pubDate>
      <link>https://forem.com/xata/writing-an-llm-eval-with-vercels-ai-sdk-and-vitest-4pfb</link>
      <guid>https://forem.com/xata/writing-an-llm-eval-with-vercels-ai-sdk-and-vitest-4pfb</guid>
      <description>&lt;h2&gt;
  
  
  The Xata Agent
&lt;/h2&gt;

&lt;p&gt;Recently we launched &lt;a href="https://github.com/xataio/agent" rel="noopener noreferrer"&gt;Xata Agent&lt;/a&gt;, an open-source AI agent which helps diagnose issues and suggest optimizations for PostgreSQL databases.&lt;/p&gt;

&lt;p&gt;To make sure that Xata Agent still works well after modifying a prompt or switching LLM models we decided to test it with an Eval. Here, we'll explain how we used &lt;a href="https://sdk.vercel.ai" rel="noopener noreferrer"&gt;Vercel's AI SDK&lt;/a&gt; and &lt;a href="https://vitest.dev" rel="noopener noreferrer"&gt;Vitest&lt;/a&gt; to build an Eval in TypeScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the Agent with an Eval
&lt;/h2&gt;

&lt;p&gt;The problem with building applications on top of LLMs is that LLMs are a black box:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// 1 Trillion parameter LLM model no human understands&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Xata Agent contains multiple prompts and tool calls. How do we know that Xata Agent still works well after modifying a prompt or switching model?&lt;/p&gt;

&lt;p&gt;To 'evaluate' how our LLM system is working we write a special kind of test - an Eval.&lt;/p&gt;

&lt;p&gt;An &lt;a href="https://hamel.dev/blog/posts/evals/" rel="noopener noreferrer"&gt;Eval&lt;/a&gt; is usually similar to a system test or an integration test, but specifically built to deal with the uncertainty of making calls to an LLM.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Eval run output
&lt;/h2&gt;

&lt;p&gt;When we run the Eval the output is a directory with one folder for each Eval test case.&lt;/p&gt;

&lt;p&gt;The folder contains the output files of the run along with 'trace' information so we can debug what happened.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./eval-run-output/
├── evalResults.json
├── eval_id_1
│   ├── evalResult.json
│   ├── human.txt
│   ├── judgeResponse.txt
│   └── response.json
├── eval_id_2
│   ├── evalResult.json
│   ├── human.txt
│   ├── judgeResponse.txt
│   └── response.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re using &lt;a href="https://sdk.vercel.ai/" rel="noopener noreferrer"&gt;Vercel's AI SDK&lt;/a&gt; to perform tool calling with different models. The &lt;code&gt;response.json&lt;/code&gt; files represent a full &lt;a href="https://sdk.vercel.ai/docs/reference/ai-sdk-core/generate-text#returns" rel="noopener noreferrer"&gt;response object&lt;/a&gt; from Vercel’s AI SDK. This contains everything we need to evaluate the Xata Agent’s performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Final text response&lt;/li&gt;
&lt;li&gt;Tool calls and intermediate ‘thinking’ the model does&lt;/li&gt;
&lt;li&gt;System + User Prompts.&lt;/li&gt;
&lt;/ul&gt;
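The conversion from that response object to the human-readable format is a straightforward walk over the steps. A simplified sketch, where the interfaces are a trimmed-down, invented stand-in for the much richer AI SDK response object:

```typescript
// Minimal stand-in for the parts of the AI SDK response we render.
// The real `generateText` result carries a richer shape; this is illustrative.
interface EvalStep {
  text: string;
  toolCalls: { toolName: string; args: Record<string, unknown> }[];
  toolResults: { result: string }[];
}

interface EvalResponse {
  system: string;
  prompt: string;
  steps: EvalStep[];
}

function toHumanReadable(response: EvalResponse): string {
  const lines: string[] = [
    `System Prompt:\n${response.system}`,
    '--------',
    `User Prompt: ${response.prompt}`,
    '--------',
  ];
  response.steps.forEach((step, i) => {
    lines.push(`Step: ${i + 1}`, '', step.text, '');
    for (const call of step.toolCalls) {
      lines.push(`${call.toolName} with args: ${JSON.stringify(call.args)}`);
    }
    for (const result of step.toolResults) {
      lines.push(`Tool Result:\n${result.result}`);
    }
    lines.push('--------');
  });
  return lines.join('\n');
}

const rendered = toHumanReadable({
  system: 'You are an AI assistant expert in PostgreSQL.',
  prompt: 'What tables do I have in my db?',
  steps: [
    {
      text: "I'll use the getTablesAndInstanceInfo tool.",
      toolCalls: [{ toolName: 'getTablesAndInstanceInfo', args: {} }],
      toolResults: [{ result: '[{"name":"dogs","schema":"public"}]' }],
    },
  ],
});
console.log(rendered);
```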

&lt;p&gt;We then convert this to a human readable format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;System Prompt:
You are an AI assistant expert in PostgreSQL and database administration.
Your name is Xata Agent.
...

--------
User Prompt: What tables do I have in my db?
--------
Step: 1

I'll help you get an overview of the tables in your database. I'll use the getTablesAndInstanceInfo tool to retrieve this information.

getTablesAndInstanceInfo with args: {}

Tool Result:
Here are the tables, their sizes, and usage counts:

[{"name":"dogs","schema":"public","rows":150,"size":"24 kB","seqScans":45,"idxScans":120,"nTupIns":200,"nTupUpd":50,"nTupDel":10}]
...

--------

Step: 2

Based on the results, you have one table in your database: `dogs`

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then built a custom UI to see all these outputs so we can quickly debug what happened in a particular Eval run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fule7g9be7a5e37j67fer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fule7g9be7a5e37j67fer.png" alt="LLM Eval Runner" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Vitest to run an Eval
&lt;/h2&gt;

&lt;p&gt;Vitest is a popular TypeScript testing framework. To create our desired folder structure we have to hook into Vitest in a few places:&lt;/p&gt;

&lt;h3&gt;
  
  
  Get an id for the Eval run
&lt;/h3&gt;

&lt;p&gt;To get a consistent id for each run of all our Eval tests we can set a &lt;code&gt;TEST_RUN_ID&lt;/code&gt; environment variable in Vitest’s globalSetup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;defineConfig&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;globalSetup&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/evals/global-setup.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;randomUUID&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;crypto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;globalSetup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TEST_RUN_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randomUUID&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then create and reference the folder for our eval run like this: &lt;code&gt;path.join('/tmp/eval-runs/', process.env.TEST_RUN_ID)&lt;/code&gt;&lt;/p&gt;
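Creating that folder up front is a small sketch on top of the same idea, here using the OS temp dir rather than a hard-coded /tmp and falling back to a fresh UUID if globalSetup hasn't run:

```typescript
import { mkdirSync, existsSync } from 'node:fs';
import path from 'node:path';
import os from 'node:os';
import { randomUUID } from 'node:crypto';

// Fall back to a fresh id so the example runs standalone (globalSetup
// normally sets TEST_RUN_ID before any test file executes).
const runId = process.env.TEST_RUN_ID ?? randomUUID();

// Create the per-run output directory, including parents.
const evalRunDir = path.join(os.tmpdir(), 'eval-runs', runId);
mkdirSync(evalRunDir, { recursive: true });

console.log(existsSync(evalRunDir)); // true
```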

&lt;h3&gt;
  
  
  Get an id for an individual Eval
&lt;/h3&gt;

&lt;p&gt;Getting an id for each individual Eval test case is a bit more tricky.&lt;/p&gt;

&lt;p&gt;Since LLM calls take some time, we need to run Vitest tests in parallel using &lt;code&gt;describe.concurrent&lt;/code&gt;. But we must then use a local copy of the &lt;code&gt;expect&lt;/code&gt; variable from the test to &lt;a href="https://vitest.dev/api/#describe-concurrent" rel="noopener noreferrer"&gt;ensure the test name is correct&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can use the Vitest describe + test name as the Eval id:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vitest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;concurrent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;judge&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;it&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;eval_id_1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// note: we must use a local version of expect when running tests concurrently&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fullEvalId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getEvalId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getEvalId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ExpectStatic&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;testName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getState&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;currentTestName&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;testNameToEvalId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;testName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;testNameToEvalId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;testName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;testName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Expected testName to be defined&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;testName&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt; &amp;gt; &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;_&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From here it’s pretty straightforward to create a folder like this: &lt;code&gt;path.join('/tmp/eval-runs/', process.env.TEST_RUN_ID, getEvalId(expect))&lt;/code&gt;.&lt;/p&gt;
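&lt;p&gt;As a minimal sketch (the helper name is illustrative, not from the Xata Agent codebase), creating that per-Eval folder might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;import fs from 'fs';
import path from 'path';

// Create (if needed) and return the trace folder for a single Eval case.
// Assumes TEST_RUN_ID was set before the test run started; falls back to
// 'local' so the helper also works in ad-hoc runs.
export function makeTraceFolder(evalId: string): string {
  const folder = path.join('/tmp/eval-runs/', process.env.TEST_RUN_ID ?? 'local', evalId);
  fs.mkdirSync(folder, { recursive: true });
  return folder;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;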

&lt;h3&gt;
  
  
  Combining the results with a Vitest Reporter
&lt;/h3&gt;

&lt;p&gt;We can use Vitest’s &lt;a href="https://vitest.dev/advanced/reporters.html" rel="noopener noreferrer"&gt;reporters&lt;/a&gt; to execute code during/after our test run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;path&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TestCase&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vitest/node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Reporter&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vitest/reporters&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;testNameToEvalId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./lib/test-id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EvalReporter&lt;/span&gt; &lt;span class="k"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;Reporter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;onTestRunEnd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;traceFolder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/tmp/eval-runs/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TEST_RUN_ID&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;folders&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readdirSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;evalTraceFolder&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// post run processing goes here&lt;/span&gt;

    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`View eval results: http://localhost:4001/evals?folder=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;evalTraceFolder&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;onTestCaseResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;testCase&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TestCase&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;skipped&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pending&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;testCase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;result&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;evalId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;testNameToEvalId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;testCase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;testCaseResult&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;evalId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;testCase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;result&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;passed&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
      &lt;span class="c1"&gt;// other stuff..&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;traceFolder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/tmp/eval-runs/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TEST_RUN_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;testNameToEvalId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;traceFolder&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;result.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;testCaseResult&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
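&lt;p&gt;To wire the reporter into the test run, it can be registered in the Vitest config (the reporter file path here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // keep the default console output and add our custom Eval reporter
    reporters: ['default', './eval-reporter.ts'],
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;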



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Vitest is a powerful and versatile test runner for TypeScript that can be straightforwardly adapted to run Evals.&lt;/p&gt;

&lt;p&gt;Vercel AI’s &lt;a href="https://sdk.vercel.ai/docs/reference/ai-sdk-core/generate-text#returns" rel="noopener noreferrer"&gt;Response objects&lt;/a&gt; contain almost everything needed to see what happened in an Eval.&lt;/p&gt;

&lt;p&gt;For full details, check out the &lt;a href="https://github.com/xataio/agent/commit/ff3a2534281efc6312182ff8c55e99dca5bcabcd" rel="noopener noreferrer"&gt;Pull Request&lt;/a&gt; which introduces Evals in the open source Xata Agent.&lt;/p&gt;

&lt;p&gt;If you're interested in monitoring and diagnosing issues with your PostgreSQL database check out the &lt;a href="https://github.com/xataio/agent" rel="noopener noreferrer"&gt;Xata Agent repo&lt;/a&gt; - issues and contributions are always welcome. You can also &lt;a href="https://tally.so/r/wgvkgM" rel="noopener noreferrer"&gt;join the waitlist&lt;/a&gt; for our cloud-hosted version.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>typescript</category>
      <category>vercel</category>
      <category>postgres</category>
    </item>
    <item>
      <title>If pg_dump is not a backup tool, what is?</title>
      <dc:creator>Cezzaine Zaher</dc:creator>
      <pubDate>Thu, 29 Aug 2024 12:30:00 +0000</pubDate>
      <link>https://forem.com/xata/if-pgdump-is-not-a-backup-tool-what-is-28hc</link>
      <guid>https://forem.com/xata/if-pgdump-is-not-a-backup-tool-what-is-28hc</guid>
      <description>&lt;p&gt;We all remember the sad day when Pluto was decommissioned as a planet. I’ve always liked Pluto—maybe it’s the name, or maybe it’s because it was the smallest, and I like supporting the underdog. Either way, it was sort of my favorite planet.&lt;/p&gt;

&lt;p&gt;Recently, while &lt;a href="https://xata.io/blog/cve-2024-7348-postgres-upgrade" rel="noopener noreferrer"&gt;writing about the vulnerability affecting &lt;code&gt;pg_dump&lt;/code&gt;&lt;/a&gt;, the topic of decommissioning &lt;code&gt;pg_dump&lt;/code&gt; came up on Twitter. Unlike the nostalgic feelings many had for Pluto, there was less reluctance to see &lt;code&gt;pg_dump&lt;/code&gt; reclassified. In fact, some people were eager to retire it as a backup utility, and I even got a bit of pushback for still referring to &lt;code&gt;pg_dump&lt;/code&gt; as one 🙂&lt;/p&gt;

&lt;p&gt;I was talking to my colleague Simona the other day, and she mentioned that everybody in Postgres circles says, "pg_dump is not a backup tool," but perhaps it’s not always explained well why it is not.&lt;/p&gt;

&lt;p&gt;If you visit the &lt;code&gt;pg_dump&lt;/code&gt; documentation page, it literally says this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;pg_dump&lt;/code&gt; is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But there is an upcoming change in its future, starting from Postgres 18, thanks to a commit from Peter Eisentraut.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Commit:&lt;/strong&gt; &lt;a href="https://gitlab.com/postgres/postgres/-/commit/4f29394ea941f688fd4faf7260d2c198931ca797" rel="noopener noreferrer"&gt;&lt;code&gt;4f29394ea941f688fd4faf7260d2c198931ca797&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;doc: Avoid too prominent use of "backup" on pg_dump man page&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Some users inadvertently rely on &lt;code&gt;pg_dump&lt;/code&gt; as their primary backup tool,&lt;br&gt;
when better solutions exist. The &lt;code&gt;pg_dump&lt;/code&gt; man page is arguably&lt;br&gt;
misleading in that it starts with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"pg_dump is a utility for backing up a PostgreSQL database."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This tones this down a little bit, by replacing most uses of "backup"&lt;br&gt;
with "export" and adding a short note that &lt;code&gt;pg_dump&lt;/code&gt; is not a&lt;br&gt;
general-purpose backup tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Discussion:&lt;/strong&gt; &lt;a href="https://www.postgresql.org/message-id/flat/70b48475-7706-4268-990d-fd522b038d96%40eisentraut.org" rel="noopener noreferrer"&gt;https://www.postgresql.org/message-id/flat/70b48475-7706-4268-990d-fd522b038d96%40eisentraut.org&lt;/a&gt;&lt;/p&gt;


&lt;/blockquote&gt;

&lt;p&gt;I am pleased that we are using better wording to explain the scope of &lt;code&gt;pg_dump&lt;/code&gt; in the Postgres documentation.&lt;/p&gt;

&lt;p&gt;What &lt;code&gt;pg_dump&lt;/code&gt; does is take a snapshot of the database at that moment. However, it excludes certain crucial components, like WAL files and the global objects of the Postgres cluster.&lt;/p&gt;

&lt;p&gt;In a moment of database disruption, you will need to restore your backups. And if you only have a &lt;code&gt;pg_dump&lt;/code&gt; export and did not also back up your WAL directory, you would lose some of the transaction history. If you are doing replication, that would complicate things and could result in longer downtime or a more extended maintenance window if it’s a planned operation.&lt;/p&gt;

&lt;p&gt;Backups should ideally be continuous. You shouldn’t have to worry about shuttling WALs and other objects back and forth between servers. You should take regular backups, continuously add to them, and restore them periodically to verify that everything works. It’s also important to monitor their health.&lt;/p&gt;

&lt;p&gt;I won’t go into details about different backup tools here, but suffice it to say, &lt;code&gt;pg_dump&lt;/code&gt; is not sufficient alone as a backup mechanism. It is a useful utility and helps a lot, but it’s not something you should consider a complete solution for backing up your Postgres clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;To recap, let’s discuss a few shortcomings of &lt;code&gt;pg_dump&lt;/code&gt; and clarify why a &lt;strong&gt;dump&lt;/strong&gt; and a &lt;strong&gt;backup&lt;/strong&gt; are different things.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pg_dump&lt;/code&gt; creates a snapshot of the database at a specific point in time. This means it captures the state of the database up to that moment but does not account for any changes made afterward. The standard &lt;code&gt;pg_dump&lt;/code&gt; output is a plain-text file containing SQL statements to recreate the database schema and its contents. While it can be in various formats, SQL is the most common. This output is what we refer to as a &lt;strong&gt;dump&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;However, this dump file does not include WAL files, which are essential for point-in-time recovery and replication. Additionally, global objects such as roles and tablespaces are excluded from the dump, which can lead to inconsistencies during restoration, especially in multi-database setups.&lt;/p&gt;

&lt;p&gt;In contrast, a &lt;strong&gt;backup&lt;/strong&gt; is a broader concept that involves creating copies of both data and metadata to ensure that a database can be restored in case of failure, corruption, or data loss. Backups can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full backup&lt;/strong&gt;: A complete copy of the entire database, including all data and metadata.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incremental backup&lt;/strong&gt;: Captures only the changes made since the last backup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous backup&lt;/strong&gt;: Continuously captures changes, providing ongoing protection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Backups generally include WAL files, allowing for point-in-time recovery. They should be performed regularly, monitored for health, and restored periodically to ensure their integrity.&lt;/p&gt;

&lt;p&gt;While &lt;code&gt;pg_dump&lt;/code&gt; serves well for exporting data for migrations or versioning, it is not designed to handle full disaster recovery scenarios. For a robust and reliable backup strategy, tools like Barman, WAL-G, and pgBackRest are better suited to ensure data safety and quick recovery.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>backup</category>
      <category>database</category>
      <category>news</category>
    </item>
    <item>
      <title>Why should you upgrade your PostgreSQL today?</title>
      <dc:creator>Cezzaine Zaher</dc:creator>
      <pubDate>Wed, 28 Aug 2024 15:15:35 +0000</pubDate>
      <link>https://forem.com/xata/why-should-you-upgrade-your-postgresql-today-2e88</link>
      <guid>https://forem.com/xata/why-should-you-upgrade-your-postgresql-today-2e88</guid>
      <description>&lt;p&gt;If you're running PostgreSQL versions 12, 13, 14, 15, or 16 and haven’t yet upgraded to the latest minor versions released on August 8, 2024, you might be exposed to a critical security vulnerability known as &lt;strong&gt;&lt;a href="https://www.postgresql.org/support/security/CVE-2024-7348/" rel="noopener noreferrer"&gt;CVE-2024-7348&lt;/a&gt;&lt;/strong&gt;. This vulnerability could allow attackers to execute arbitrary SQL code during &lt;code&gt;pg_dump&lt;/code&gt; operations as the user running &lt;code&gt;pg_dump&lt;/code&gt;, potentially compromising your entire database.&lt;/p&gt;

&lt;h3&gt;
  
  
  The vulnerability: CVE-2024-7348
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;CVE-2024-7348&lt;/strong&gt; is a Time-of-Check Time-of-Use (TOCTOU) race condition vulnerability in &lt;code&gt;pg_dump&lt;/code&gt;, a utility used for backing up PostgreSQL databases. This vulnerability allows an attacker to replace a relation type (such as a table or sequence) with a view or foreign table right when &lt;code&gt;pg_dump&lt;/code&gt; is running. Because &lt;code&gt;pg_dump&lt;/code&gt; often runs with superuser privileges, this attack could execute arbitrary SQL code, leading to unauthorized actions or data corruption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Affected versions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL 12 (before 12.20)&lt;/li&gt;
&lt;li&gt;PostgreSQL 13 (before 13.16)&lt;/li&gt;
&lt;li&gt;PostgreSQL 14 (before 14.13)&lt;/li&gt;
&lt;li&gt;PostgreSQL 15 (before 15.8)&lt;/li&gt;
&lt;li&gt;PostgreSQL 16 (before 16.4)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your PostgreSQL instance falls within any of these versions and has not been upgraded to the latest minor release, you are vulnerable to this exploit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding the risk
&lt;/h3&gt;

&lt;p&gt;The attack described in CVE-2024-7348 is not just theoretical. An attacker with sufficient privileges to create and drop objects in the database could inject malicious SQL that &lt;code&gt;pg_dump&lt;/code&gt; would then execute. This could be devastating, especially if &lt;code&gt;pg_dump&lt;/code&gt; is running as a superuser, which is often the case in many environments.&lt;/p&gt;

&lt;p&gt;Here’s how the attack works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The attacker creates a non-temporary object in the database, like a sequence.&lt;/li&gt;
&lt;li&gt;Just before &lt;code&gt;pg_dump&lt;/code&gt; begins, the attacker replaces this object with a view or a foreign table that contains malicious SQL.&lt;/li&gt;
&lt;li&gt;When &lt;code&gt;pg_dump&lt;/code&gt; attempts to back up the database, it inadvertently executes the injected SQL code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This sequence of actions is possible because &lt;code&gt;pg_dump&lt;/code&gt; checks the type of object before starting the backup, but due to the race condition, the object can be replaced before the actual backup begins. The vulnerability hinges on the fact that the backup process trusts the integrity of database objects without verifying them at every step of the operation.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the PostgreSQL team addressed the issue
&lt;/h3&gt;

&lt;p&gt;To mitigate this vulnerability, the PostgreSQL team introduced a fix in the latest minor releases. The fix involves a new server parameter, &lt;code&gt;restrict_nonsystem_relation_kind&lt;/code&gt;, which limits the ability to expand non-builtin views and access foreign tables during &lt;code&gt;pg_dump&lt;/code&gt;. This parameter, when set, effectively closes the loophole that allows the race condition to be exploited.&lt;/p&gt;

&lt;p&gt;However, it's important to note that the protection is only active if both &lt;code&gt;pg_dump&lt;/code&gt; and the server are updated to the versions containing the fix. If either side is running an older version, the vulnerability could still be exploited.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our experience at Xata
&lt;/h3&gt;

&lt;p&gt;We first noticed the issue when a user reported a failing &lt;code&gt;pg_dump&lt;/code&gt; query in the &lt;a href="https://discord.com/channels/996791218879086662/1271351087370207273/1271351087370207273" rel="noopener noreferrer"&gt;Xata Discord help channel&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpadtybh8xbjoy82fags.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpadtybh8xbjoy82fags.png" alt="Huh, that looks like a new parameter" width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same error soon started appearing in some of our CI tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_dump: error: query failed: couldn't get first string argument
pg_dump: detail: Query was: SELECT set_config(name, 'view, foreign-table', false) FROM pg_settings WHERE name = 'restrict_nonsystem_relation_kind'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While checking the query, we noticed the newly introduced &lt;code&gt;restrict_nonsystem_relation_kind&lt;/code&gt; parameter, which is linked to CVE-2024-7348.&lt;/p&gt;

&lt;p&gt;For the query itself, we determined the issue was on our side and began working on a fix, as shared by Tudor in our Discord channel:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1961ydb1o6pcomcj4lti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1961ydb1o6pcomcj4lti.png" alt="A quick fix from the Xata team" width="800" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you'd like to join our friendly community, here's an invite to our Discord: &lt;a href="http://xata.io/discord" rel="noopener noreferrer"&gt;Xata Discord&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What you should do next
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Upgrade Immediately&lt;/strong&gt;: If you're running an affected version of PostgreSQL, the solution is simple—upgrade to the latest minor version immediately. The following versions include the fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL 12.20&lt;/li&gt;
&lt;li&gt;PostgreSQL 13.16&lt;/li&gt;
&lt;li&gt;PostgreSQL 14.13&lt;/li&gt;
&lt;li&gt;PostgreSQL 15.8&lt;/li&gt;
&lt;li&gt;PostgreSQL 16.4&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Test After Upgrading&lt;/strong&gt;: After upgrading, thoroughly test your database operations, especially any automated backups or &lt;code&gt;pg_dump&lt;/code&gt; processes, to ensure they function correctly and securely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review User Permissions&lt;/strong&gt;: Consider reviewing the permissions of users who can create or drop database objects. Restricting these privileges can further reduce the risk of exploitation, even if a similar vulnerability arises in the future.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Security vulnerabilities like CVE-2024-7348 remind us of the importance of staying current with software updates, especially for mission-critical infrastructure like databases. By upgrading to the latest PostgreSQL versions, you not only protect your data but also ensure the continued reliability and security of your applications.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>security</category>
      <category>database</category>
      <category>sql</category>
    </item>
    <item>
      <title>pgroll update 0.6.0</title>
      <dc:creator>Cezzaine Zaher</dc:creator>
      <pubDate>Mon, 05 Aug 2024 10:41:14 +0000</pubDate>
      <link>https://forem.com/xata/pgroll-update-060-pf2</link>
      <guid>https://forem.com/xata/pgroll-update-060-pf2</guid>
      <description>&lt;p&gt;&lt;a href="https://xata.io/pgroll" rel="noopener noreferrer"&gt;pgroll&lt;/a&gt; is Xata's open-source schema migration tool for Postgres. The feature that sets it apart from other migration tools is its ability to perform multi-version schema migrations using the expand/contract pattern: while a migration is in progress, &lt;code&gt;pgroll&lt;/code&gt; is able to present two versions of your database schema to client applications - the old version and the new. This means that client applications that depend on the old version of the database schema continue to function while the migration is in progress, making application deployments much simpler. Gone are the days of having to perform &lt;a href="https://planetscale.com/docs/learn/handling-table-and-column-renames#how-to-rename-a-column-on-planetscale" rel="noopener noreferrer"&gt;complicated multi-step workflows&lt;/a&gt; to do simple schema changes without breaking clients.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pgroll&lt;/code&gt; is open-source under the Apache-2.0 license and developed in the open on &lt;a href="https://github.com/xataio/pgroll" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We released &lt;a href="https://xata.io/blog/pgroll-schema-migrations-postgres" rel="noopener noreferrer"&gt;pgroll v0.1.0&lt;/a&gt; in October 2023. Since then we've made &lt;a href="https://github.com/xataio/pgroll/releases" rel="noopener noreferrer"&gt;eight further releases&lt;/a&gt; as we continue to build &lt;code&gt;pgroll&lt;/code&gt; and turn it into a first-class open-source schema migration tool for Postgres.&lt;/p&gt;

&lt;p&gt;In this post we'll have a look at the changes that went into the most recent release, &lt;code&gt;v0.6.0&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new in pgroll v0.6.0
&lt;/h2&gt;

&lt;p&gt;There was a three month gap between the release of &lt;a href="https://github.com/xataio/pgroll/releases/tag/v0.5.0" rel="noopener noreferrer"&gt;v0.5.0&lt;/a&gt; and &lt;a href="https://github.com/xataio/pgroll/releases/tag/v0.6.0" rel="noopener noreferrer"&gt;v0.6.0&lt;/a&gt;. As a result the changelog for &lt;code&gt;v0.6.0&lt;/code&gt; is fairly extensive:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodj5pmlaouwmjh9j0kvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodj5pmlaouwmjh9j0kvj.png" alt="That's quite a big Changelog" width="800" height="962"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We'll pick out a few highlights from the changelog rather than going through every change in detail 😴&lt;/p&gt;

&lt;h3&gt;
  
  
  Retry on lock timeout errors
&lt;/h3&gt;

&lt;p&gt;We actually wrote an entire blog post about the Postgres lock graph and why it's important for a migration tool like &lt;code&gt;pgroll&lt;/code&gt; to use appropriate &lt;code&gt;lock_timeout&lt;/code&gt; values on DDL statements and to automatically back off and retry on lock acquisition failures. As of &lt;code&gt;v0.6.0&lt;/code&gt;, &lt;code&gt;pgroll&lt;/code&gt; does exactly this. Rather than repeat the same information here, take a look at our blog post on &lt;a href="https://xata.io/blog/migrations-and-exclusive-locks" rel="noopener noreferrer"&gt;schema migrations and the Postgres lock queue&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Run all DDL operations before running any data migrations
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pgroll&lt;/code&gt; runs migrations using the &lt;a href="https://openpracticelibrary.com/practice/expand-and-contract-pattern/" rel="noopener noreferrer"&gt;expand/contract pattern&lt;/a&gt;. This means that, for example, when adding a constraint to a column, &lt;code&gt;pgroll&lt;/code&gt; creates a new column, adds the constraint to it, and runs a data migration to move data from the old column to the new one, using the user-defined &lt;code&gt;up&lt;/code&gt; and &lt;code&gt;down&lt;/code&gt; SQL expressions in the migration file. The old column is removed on migration completion.&lt;/p&gt;

&lt;p&gt;As of &lt;code&gt;v0.6.0&lt;/code&gt;, &lt;code&gt;pgroll&lt;/code&gt; will now run all DDL operations before running any data migrations. Previously, any data migrations required by a &lt;code&gt;pgroll&lt;/code&gt; operation would run inline immediately after the operation that required it. Data migrations are the most time-consuming part of a migration so by running them in a group after DDL has completed, &lt;code&gt;pgroll&lt;/code&gt; ensures that the database schema is fully expanded before running data migrations. This also opens the door to allowing finer control over the data migration process in future releases, for example providing control over batch sizes, &lt;a href="https://github.com/xataio/pgroll/issues/168" rel="noopener noreferrer"&gt;controlling the rate of data migrations&lt;/a&gt; or even running the data migration phase as an entirely separate step some time after DDL has completed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Modification of column default values
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pgroll&lt;/code&gt; now supports changing column &lt;code&gt;DEFAULT&lt;/code&gt; values. A migration to do so looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"02_change_default"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"operations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"alter_column"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"table"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"events"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"column"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"'new default value'"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Changing a column default is a versioned operation, so the column to which the new default is applied is duplicated and backfilled according to the &lt;code&gt;up&lt;/code&gt; and &lt;code&gt;down&lt;/code&gt; SQL supplied with the 'alter column' operation. &lt;code&gt;up&lt;/code&gt; and &lt;code&gt;down&lt;/code&gt; default to a simple copy of the value between the old and new columns.&lt;/p&gt;

&lt;p&gt;See the documentation for the &lt;a href="https://github.com/xataio/pgroll/tree/main/docs#change-default" rel="noopener noreferrer"&gt;alter column operation&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Changing multiple column properties in one operation
&lt;/h3&gt;

&lt;p&gt;One significant limitation in versions of &lt;code&gt;pgroll&lt;/code&gt; prior to &lt;code&gt;v0.6.0&lt;/code&gt; was that &lt;code&gt;alter_column&lt;/code&gt; operations could only change one 'property' of a column at a time. If, for example, you wanted to set a column to &lt;code&gt;NOT NULL&lt;/code&gt; and add a &lt;code&gt;CHECK&lt;/code&gt; constraint at the same time, this had to be done as successive migrations. As of &lt;code&gt;v0.6.0&lt;/code&gt;, however, there is no restriction on the number of changes that can be made to the same column in a single operation.&lt;/p&gt;

&lt;p&gt;A migration that changes multiple column properties in one operation looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"35_alter_column_multiple"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"operations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"alter_column"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"table"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"events"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"column"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"event_name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"'new default value'"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"nullable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"unique"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"events_event_name_unique"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"check"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"event_name_length"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"constraint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"length(name) &amp;gt; 3"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"up"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"(SELECT CASE WHEN name IS NULL THEN 'placeholder' ELSE name END)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"down"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the documentation for the &lt;a href="https://github.com/xataio/pgroll/tree/main/docs#alter-column" rel="noopener noreferrer"&gt;alter column operation&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Renaming constraints
&lt;/h3&gt;

&lt;p&gt;Perhaps it's a smaller change than others listed here, but the ability to rename constraints is a useful addition to &lt;code&gt;pgroll&lt;/code&gt; &lt;code&gt;v0.6.0&lt;/code&gt;. A migration to rename a constraint looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"33_rename_check_constraint"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"operations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"rename_constraint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"table"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"people"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"from"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"name_length"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"to"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"name_length_check"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the documentation for the &lt;a href="https://github.com/xataio/pgroll/tree/main/docs#rename-constraint" rel="noopener noreferrer"&gt;rename constraint operation&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running raw SQL migrations on completion
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pgroll&lt;/code&gt; supports raw SQL migrations as an 'escape hatch' when there is no other way to perform a migration using the existing &lt;code&gt;pgroll&lt;/code&gt; operations. SQL migrations offer none of the multi-version guarantees inherent in other &lt;code&gt;pgroll&lt;/code&gt; operation types; when a SQL migration runs it is applied directly to the underlying schema and &lt;code&gt;pgroll&lt;/code&gt; does not present versions of the schema before and after the change as it does for other operation types.&lt;/p&gt;

&lt;p&gt;In versions of &lt;code&gt;pgroll&lt;/code&gt; before &lt;code&gt;v0.6.0&lt;/code&gt;, SQL migrations would always be run on migration start. As of &lt;code&gt;v0.6.0&lt;/code&gt;, SQL migrations have the option to run on migration complete instead by setting the &lt;code&gt;onComplete&lt;/code&gt; field. Such a migration looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"32_sql_on_complete"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"operations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"sql"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"up"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ALTER TABLE people ADD COLUMN birth_date timestamp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"onComplete"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ability to run a raw SQL migration on completion allows the SQL statement to reference the versioned views that &lt;code&gt;pgroll&lt;/code&gt; creates for the other (non-SQL) operations in the same migration.&lt;/p&gt;

&lt;p&gt;See the documentation for the &lt;a href="https://github.com/xataio/pgroll/tree/main/docs#raw-sql" rel="noopener noreferrer"&gt;raw SQL operation&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using pgroll as a module
&lt;/h2&gt;

&lt;p&gt;The next two features relate to using &lt;code&gt;pgroll&lt;/code&gt; from within other programs rather than as a standalone CLI tool. &lt;code&gt;pgroll&lt;/code&gt; is written in Go and its core APIs can be imported and used as a module from other Go programs. These APIs are not stable yet; they will become stable as part of a &lt;code&gt;v1.0.0&lt;/code&gt; release, but can be used today by importing the relevant packages from the &lt;code&gt;github.com/xataio/pgroll/&lt;/code&gt; module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/xataio/pgroll/pkg/migrations"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/xataio/pgroll/pkg/roll"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/xataio/pgroll/pkg/state"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;pgroll&lt;/code&gt; &lt;code&gt;v0.6.0&lt;/code&gt; adds two new features that are exposed only when using &lt;code&gt;pgroll&lt;/code&gt; as a module from within another Go program:&lt;/p&gt;

&lt;h3&gt;
  
  
  Rewriting or rejecting user input SQL
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pgroll&lt;/code&gt; takes SQL as input from the user in various places in migration files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;up&lt;/code&gt; and &lt;code&gt;down&lt;/code&gt; SQL for data migrations&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;up&lt;/code&gt; and &lt;code&gt;down&lt;/code&gt; SQL for raw SQL migrations&lt;/li&gt;
&lt;li&gt;column &lt;code&gt;DEFAULT&lt;/code&gt; values&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In most scenarios the fact that this is untrusted user input is not a problem; the typical use case for a migration tool like &lt;code&gt;pgroll&lt;/code&gt; is running migrations authored by the same set of users that own the database to which the migrations are applied. However, in a multi-tenant environment, migration authors aren't the owners of the database instance where the migrations run. In such a scenario, it may be desirable to restrict the range of SQL expressions allowed in &lt;code&gt;pgroll&lt;/code&gt; migrations.&lt;/p&gt;

&lt;p&gt;As of &lt;code&gt;v0.6.0&lt;/code&gt;, &lt;code&gt;pgroll&lt;/code&gt; allows rewriting and rejecting user input SQL expressions using the &lt;code&gt;SQLTransformer&lt;/code&gt; interface and the &lt;code&gt;roll.WithSQLTransformer&lt;/code&gt; option.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;SQLTransformer&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;TransformSQL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A SQL transformer can be provided as an option to the &lt;code&gt;roll.New&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;roll&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pgURL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;roll&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithSQLTransformer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sqlTransformer&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;// ...&lt;/span&gt;

&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The provided SQL transformer will be used to rewrite and, if necessary, reject user-input SQL expressions before they are executed by &lt;code&gt;pgroll&lt;/code&gt;.&lt;/p&gt;
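&lt;p&gt;For example, a minimal (hypothetical) transformer might reject any SQL containing a keyword from a deny list and pass everything else through unchanged. A real multi-tenant deployment would more likely parse the SQL properly, but the interface usage is the same:&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// SQLTransformer mirrors the interface shown above.
type SQLTransformer interface {
	TransformSQL(sql string) (string, error)
}

// denyListTransformer is a hypothetical transformer that rejects SQL
// containing any denied keyword and passes everything else through.
type denyListTransformer struct {
	denied []string
}

func (t denyListTransformer) TransformSQL(sql string) (string, error) {
	upper := strings.ToUpper(sql)
	for _, kw := range t.denied {
		if strings.Contains(upper, kw) {
			return "", errors.New("disallowed SQL keyword: " + kw)
		}
	}
	return sql, nil
}

func main() {
	var tr SQLTransformer = denyListTransformer{denied: []string{"DROP", "TRUNCATE"}}

	out, err := tr.TransformSQL("SELECT lower(name) FROM users")
	fmt.Println(out, err) // allowed SQL passes through unchanged

	_, err = tr.TransformSQL("DROP TABLE users")
	fmt.Println(err) // disallowed SQL is rejected with an error
}
```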

&lt;h3&gt;
  
  
  Hooks
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pgroll&lt;/code&gt; &lt;code&gt;v0.6.0&lt;/code&gt; adds a new &lt;code&gt;MigrationHooks&lt;/code&gt; struct that can be used to provide hooks that are called at various points during the migration execution process. The four hooks available in &lt;code&gt;v0.6.0&lt;/code&gt; are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;BeforeStartDDL&lt;/code&gt; - called before the DDL phase of migration start&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AfterStartDDL&lt;/code&gt; - called after the DDL phase of migration start is complete&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;BeforeCompleteDDL&lt;/code&gt; - called before the DDL phase of migration complete&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AfterCompleteDDL&lt;/code&gt; - called after the DDL phase of migration complete is complete&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the hooks struct itself is defined like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// MigrationHooks defines hooks that can be set to be called at various points&lt;/span&gt;
&lt;span class="c"&gt;// during the migration process&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;MigrationHooks&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// BeforeStartDDL is called before the DDL phase of migration start&lt;/span&gt;
    &lt;span class="n"&gt;BeforeStartDDL&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Roll&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
    &lt;span class="c"&gt;// AfterStartDDL is called after the DDL phase of migration start is complete&lt;/span&gt;
    &lt;span class="n"&gt;AfterStartDDL&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Roll&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
    &lt;span class="c"&gt;// BeforeCompleteDDL is called before the DDL phase of migration complete&lt;/span&gt;
    &lt;span class="n"&gt;BeforeCompleteDDL&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Roll&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
    &lt;span class="c"&gt;// AfterCompleteDDL is called after the DDL phase of migration complete is complete&lt;/span&gt;
    &lt;span class="n"&gt;AfterCompleteDDL&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Roll&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hooks can be provided to the &lt;code&gt;roll.New&lt;/code&gt; function via an option on construction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;roll&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pgURL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;roll&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithMigrationHooks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;roll&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MigrationHooks&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;BeforeStartDDL&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;roll&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Roll&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="c"&gt;// ...&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;AfterStartDDL&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;roll&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Roll&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="c"&gt;// ...&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;BeforeCompleteDDL&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;roll&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Roll&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="c"&gt;// ...&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;AfterCompleteDDL&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;roll&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Roll&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="c"&gt;// ...&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="c"&gt;// Hooks will be executed at the appropriate points during the migration&lt;/span&gt;
&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Complete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using hooks allows for custom logic to be run at various points during the migration process. For example, a hook could be used to log the start and end of a migration, or to send a notification when a migration completes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;pgroll&lt;/code&gt; &lt;code&gt;v0.6.0&lt;/code&gt; is a significant release that adds a number of new features and improvements to the tool. But of course, we're not done yet - work on the next version continues apace, as we work towards a &lt;a href="https://github.com/xataio/pgroll/milestones" rel="noopener noreferrer"&gt;v1.0.0&lt;/a&gt; release of &lt;code&gt;pgroll&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We'd love to hear from you if you're using &lt;code&gt;pgroll&lt;/code&gt; in your projects. We welcome external contributions to &lt;code&gt;pgroll&lt;/code&gt;, either in the form of issues on the &lt;a href="https://github.com/xataio/pgroll/issues" rel="noopener noreferrer"&gt;issue tracker&lt;/a&gt; or by getting involved with development directly by opening &lt;a href="https://github.com/xataio/pgroll/pulls" rel="noopener noreferrer"&gt;pull requests&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can follow along with all the news from our recent &lt;a href="https://xata.io/launch-week-elephant-on-the-move" rel="noopener noreferrer"&gt;launch week &lt;/a&gt;, or just &lt;a href="https://xata.io/discord" rel="noopener noreferrer"&gt;pop into Discord&lt;/a&gt; and say hi 👋&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>news</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Postgres major version upgrades with minimal downtime</title>
      <dc:creator>Cezzaine Zaher</dc:creator>
      <pubDate>Thu, 01 Aug 2024 13:24:50 +0000</pubDate>
      <link>https://forem.com/xata/postgres-major-version-upgrades-with-minimal-downtime-1k0m</link>
      <guid>https://forem.com/xata/postgres-major-version-upgrades-with-minimal-downtime-1k0m</guid>
      <description>&lt;p&gt;One of the challenges of running a Postgres database is upgrading to a new major version. Major version upgrades can introduce new features, performance improvements, and bug fixes, but they can also be complex and time-consuming.&lt;/p&gt;

&lt;p&gt;The most widely accepted methods for upgrading Postgres to a new major version are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.postgresql.org/docs/current/pgupgrade.html" rel="noopener noreferrer"&gt;pg_upgrade&lt;/a&gt; for an &lt;strong&gt;in-place upgrade&lt;/strong&gt;, which involves stopping the server while the upgrade is performed. Although this usually completes quickly, it means taking the service down for a few minutes, plus dealing with any issues that arise during that downtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Blue-green deployments&lt;/strong&gt;, which leverage &lt;a href="https://www.postgresql.org/docs/current/logical-replication.html" rel="noopener noreferrer"&gt;logical replication&lt;/a&gt; to create a standby replica (running a newer version) of the primary server. Once the replica has caught up, a switchover is performed: writes to the primary are cut and traffic is switched to the replica. More details on this approach below.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
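&lt;p&gt;For reference, the core of the blue-green approach in plain Postgres looks something like the following sketch; the publication, subscription, and connection details are placeholders:&lt;/p&gt;

```sql
-- On the old-version primary: publish all tables.
CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

-- On the new-version standby: subscribe to the primary's publication.
CREATE SUBSCRIPTION upgrade_sub
  CONNECTION 'host=old-primary dbname=app user=replicator'
  PUBLICATION upgrade_pub;

-- Once the standby has caught up: stop writes on the primary,
-- wait for replication lag to reach zero, then switch application
-- traffic to the standby and drop the subscription.
```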

&lt;p&gt;Common issues with major Postgres upgrades are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Downtime&lt;/strong&gt;: Planned (and unplanned) downtime is not uncommon when a major version upgrade happens, impacting users and business.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex orchestration&lt;/strong&gt;: Orchestrating a migration is a multi-step process that requires careful planning, with little room for errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adapting clients&lt;/strong&gt;: Beyond the infrastructure work, client applications often need to be aware of the migration and update their database connection details to point to the new host once the migration is done (DNS or proxies can help here).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For all these reasons, major version upgrades are often postponed and become a major event across multiple engineering teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Moving Database Branches in Xata
&lt;/h3&gt;

&lt;p&gt;The most interesting feature from our &lt;a href="https://xata.io/blog/postgresql-dedicated-clusters-beta" rel="noopener noreferrer"&gt;dedicated clusters public beta announcement&lt;/a&gt; this week is the ability to move databases and branches between clusters. This allows Xata customers to move a database between two Postgres clusters while it's still being accessed. You can use this as a way to move a database from a shared cluster in our free tier to your own dedicated cluster, for better performance &amp;amp; isolation.&lt;/p&gt;

&lt;p&gt;This feature also makes it possible to perform near zero-downtime major Postgres version upgrades by moving a database between two clusters running different versions.&lt;/p&gt;

&lt;h3&gt;
  
  
  How it works
&lt;/h3&gt;

&lt;p&gt;We implemented a blue-green approach in order to offer the ability to move databases, hiding all the complexity behind a simple user experience.&lt;/p&gt;

&lt;p&gt;To understand the process of moving a database, it helps to explain a key part of our infrastructure: all Xata databases are accessed through the same proxy service, which handles authentication and routes traffic to the cluster hosting the target database:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9cwme2jke7jthbmutll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9cwme2jke7jthbmutll.png" alt="Xata clusters and proxy" width="600" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This gives us fine-grained control over connections, ensures security, and enables many features to come. In the case of moving databases, it guarantees that we can redirect traffic to the correct cluster as needed.&lt;/p&gt;

&lt;p&gt;This is the process to move a database branch when requested:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nyamir7cq1s9njtn400.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nyamir7cq1s9njtn400.png" alt="Moving a database branch between clusters" width="800" height="746"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can divide it into four main steps:&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;In this step we recreate the database in the target cluster and configure replication to start syncing it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Block schema changes in the source cluster. This allows us to replicate the database consistently, without missing any changes.&lt;/li&gt;
&lt;li&gt;Replicate the database schema in the target cluster.&lt;/li&gt;
&lt;li&gt;Create a &lt;a href="https://www.postgresql.org/docs/current/logical-replication-publication.html" rel="noopener noreferrer"&gt;publication&lt;/a&gt; in the source cluster.&lt;/li&gt;
&lt;li&gt;Subscribe to it in the target.&lt;/li&gt;
&lt;/ul&gt;
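
&lt;p&gt;In plain Postgres terms, the replication part of this setup boils down to a publication/subscription pair. The names and connection string below are illustrative, not Xata's actual internals:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- On the source cluster: publish all tables of the database being moved
CREATE PUBLICATION move_pub FOR ALL TABLES;

-- On the target cluster: subscribing also kicks off the initial data copy
CREATE SUBSCRIPTION move_sub
  CONNECTION 'host=source-cluster dbname=app user=replicator'
  PUBLICATION move_pub;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;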

&lt;h3&gt;
  
  
  Wait for initial sync
&lt;/h3&gt;

&lt;p&gt;With the databases now replicating, we wait for the sync process to catch up, so that the replica is following close enough behind the primary.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the replication slots created by the subscription and wait for their &lt;a href="https://www.postgresql.org/docs/current/datatype-pg-lsn.html" rel="noopener noreferrer"&gt;LSN&lt;/a&gt; distance to be low enough.&lt;/li&gt;
&lt;/ul&gt;
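
&lt;p&gt;This check can be expressed with Postgres' built-in LSN arithmetic; the slot name and threshold below are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- On the source cluster: bytes of WAL the subscriber still has to confirm
SELECT slot_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
FROM pg_replication_slots
WHERE slot_name = 'move_sub';
-- Proceed to the switch once lag_bytes is consistently small (e.g. under 1 MB)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;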

&lt;h3&gt;
  
  
  Switch
&lt;/h3&gt;

&lt;p&gt;Once the target cluster (replica) is close enough to be promoted to primary, we start the switch. It's critical that this step is performed as quickly as possible, as we want to keep downtime to a minimum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Block writes to the source cluster. We do this by revoking write permissions from the accessing roles.&lt;/li&gt;
&lt;li&gt;Wait for the replication slot to drain. With writes stopped, we need to ensure that all writes that went through make it to the target cluster.&lt;/li&gt;
&lt;li&gt;Switch traffic to the target cluster (new primary). All new connections will go to it.&lt;/li&gt;
&lt;li&gt;Kill previously existing connections to the source cluster, ensuring they reconnect to the new one.&lt;/li&gt;
&lt;/ul&gt;
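
&lt;p&gt;The write-blocking and connection-draining steps map to standard Postgres commands; the role and database names here are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Block writes by revoking DML from the application role
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM app_user;

-- After traffic is switched, terminate remaining connections to the source
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'app' AND pid != pg_backend_pid();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;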

&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;

&lt;p&gt;Clean up all created replication slots and any leftover state in the source cluster.&lt;/p&gt;

&lt;p&gt;With these steps the process is done, and the database is now accessible on the newer version. It's important to note that this process offers reasonable guarantees: reads work during the whole process, and writes are only blocked for a brief period of time (normally under 100ms).&lt;/p&gt;

&lt;h2&gt;
  
  
  Future work
&lt;/h2&gt;

&lt;p&gt;We are excited about this new feature, and we have many ideas about how we could evolve it in the future, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Providing more detailed information about the progress of a move.&lt;/li&gt;
&lt;li&gt;Achieving true zero downtime by holding and then redirecting writes in the proxy, avoiding the need to kill any connections.&lt;/li&gt;
&lt;li&gt;Allowing schema changes while a move happens.&lt;/li&gt;
&lt;li&gt;Providing more fine-grained control over the process, for instance letting users decide when to switch, or sending traffic to the target cluster before the switch to perform custom tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Let us know what you think
&lt;/h2&gt;

&lt;p&gt;We are excited to share this with you and look forward to hearing your feedback! Want to see major version upgrades in action? Check out this quick demo video.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/5N2mX06MXII"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you have any suggestions or questions, please open a feature request in &lt;a href="https://xata.canny.io/feature-requests" rel="noopener noreferrer"&gt;Canny&lt;/a&gt;, reach out to us on &lt;a href="https://xata.io/discord" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; or follow us on &lt;a href="https://twitter.com/xata" rel="noopener noreferrer"&gt;X / Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can recap on all of our launch week announcements every day on our &lt;a href="https://xata.io/launch-week-elephant-on-the-move" rel="noopener noreferrer"&gt;launch week page&lt;/a&gt; 🦋&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>news</category>
      <category>sql</category>
    </item>
    <item>
      <title>Introducing pgstream: Postgres replication with DDL changes</title>
      <dc:creator>Cezzaine Zaher</dc:creator>
      <pubDate>Tue, 30 Jul 2024 16:57:35 +0000</pubDate>
      <link>https://forem.com/xata/introducing-pgstream-postgres-replication-with-ddl-changes-1iio</link>
      <guid>https://forem.com/xata/introducing-pgstream-postgres-replication-with-ddl-changes-1iio</guid>
      <description>&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;At Xata, Postgres takes centre stage. And while it is our main database, we also offer other features that require us to extend its reach while keeping the data in sync. A good example of this is our &lt;a href="https://xata.io/full-text-search" rel="noopener noreferrer"&gt;full-text search&lt;/a&gt; feature, which enables the use of Elasticsearch on top of Postgres. To keep these two datastores in sync, we capture and identify data and schema changes in Postgres and push these modifications downstream to Elasticsearch with minimal latency. This is often referred to as CDC (Change Data Capture).&lt;/p&gt;

&lt;p&gt;So now you know our use case, but why did we build our own replication tool? There are many established solutions out there, but we had very specific requirements, including support for continuous tracking of schema changes (DDL). This was something that existing tooling didn't support at the time. Database schemas have a tendency to change over time - if your CDC tool doesn't support replicating them, you risk data loss and manual intervention to fix your pipeline. There had to be a better way!&lt;/p&gt;

&lt;p&gt;We also wanted a solution that was easy to deploy and operate for both big and small setups, which isn't always the case for existing tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing pgstream
&lt;/h2&gt;

&lt;p&gt;And so &lt;a href="https://github.com/xataio/pgstream" rel="noopener noreferrer"&gt;pgstream&lt;/a&gt; was born! pgstream is an open source CDC command-line tool and library. Some of its key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Schema change tracking and replication of DDL changes:&lt;/strong&gt; it's no surprise that this became an integral feature of pgstream, since it was one of the biggest requirements. We will go into a bit more detail on how this is implemented below.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modular deployment configuration:&lt;/strong&gt; pgstream's modular implementation allows it to be configured for simple use cases, removing unnecessary complexity and deployment challenges - the only requirement for pgstream is a Postgres database! However, it can also easily integrate with Kafka for more complex use cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Elasticsearch/OpenSearch replication support:&lt;/strong&gt; out of the box support for replicating Postgres data and schema changes to an Elasticsearch compatible store, with special handling of field IDs to minimise re-indexing caused by column renames.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Webhook support:&lt;/strong&gt; out of the box support to invoke a webhook endpoint whenever your source data changes. Helpful for reacting to specific data changes seamlessly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How does pgstream work?
&lt;/h2&gt;

&lt;p&gt;Internally, pgstream is constructed as a streaming pipeline, where data from one module streams into the next, eventually reaching the configured output plugins. pgstream keeps track of schema changes and replicates them alongside the data changes to maintain a consistent view of the source data downstream. This modular approach makes adding and integrating output plugin implementations simple and painless.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusd4if3pxrb6ubbugxya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusd4if3pxrb6ubbugxya.png" alt="pgstream architecture" width="800" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Tracking schema changes
&lt;/h3&gt;

&lt;p&gt;One of the main differentiators of pgstream is that it tracks and replicates schema changes automatically. How? It relies on SQL triggers that will populate a Postgres table (&lt;code&gt;pgstream.schema_log&lt;/code&gt;) containing a history log of all DDL changes for a given schema. Whenever a schema change occurs, this trigger creates a new row in the schema log table with the schema encoded as a JSON value. This table tracks all the schema changes, forming a linearised change log that is then parsed and used within the pgstream pipeline to identify modifications and push the relevant changes downstream.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvyx7lltoh3kso66b6c57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvyx7lltoh3kso66b6c57.png" alt="Tracking schema changes" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The schema and data changes are part of the same linear stream - the downstream consumers always observe the schema changes as soon as they happen, before any data arrives that relies on the new schema. This prevents data loss and manual intervention.&lt;/p&gt;
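
&lt;p&gt;Conceptually, the trigger-based tracking looks like the sketch below. This is a simplified illustration (logging only the DDL command tag), not pgstream's actual implementation, which serialises the full schema as a JSON value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Simplified sketch of DDL tracking via an event trigger
CREATE OR REPLACE FUNCTION pgstream.log_ddl() RETURNS event_trigger AS $$
BEGIN
  INSERT INTO pgstream.schema_log (schema_name, created_at, schema)
  VALUES (current_schema, now(), to_jsonb(tg_tag));
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER pgstream_log_ddl ON ddl_command_end
  EXECUTE FUNCTION pgstream.log_ddl();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;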

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;Disclaimer: this section makes many references to the &lt;a href="https://www.postgresql.org/docs/current/wal-intro.html" rel="noopener noreferrer"&gt;WAL&lt;/a&gt; (Write-Ahead Log), a sequential record of all changes made to a database and a key component of Postgres replication.&lt;/p&gt;

&lt;p&gt;Now, let's dive a little deeper into the stream!&lt;/p&gt;

&lt;p&gt;At a high level, the internal implementation is split into WAL listeners and WAL processors.&lt;/p&gt;

&lt;h3&gt;
  
  
  WAL listener
&lt;/h3&gt;

&lt;p&gt;A listener is anything that listens to WAL data, regardless of the source. It has a single responsibility: consume and manage the WAL events, delegating the processing of those entries to modules that form the processing pipeline. Depending on the listener implementation, it may also need a checkpointer to flag the events as processed once the processor is done.&lt;/p&gt;

&lt;p&gt;There are currently two implementations of the listener:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Postgres listener&lt;/strong&gt;: listens to WAL events directly from the replication slot. Since the WAL replication slot is sequential, the Postgres WAL listener is limited to run as a single process. The associated Postgres checkpointer will sync the &lt;a href="https://pgpedia.info/l/LSN-log-sequence-number.html" rel="noopener noreferrer"&gt;LSN&lt;/a&gt; (Log Sequence Number) so that the replication lag doesn't grow indefinitely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kafka reader&lt;/strong&gt;: reads WAL events from a Kafka topic. It can be configured to run concurrently by using partitions and Kafka consumer groups, applying a fan-out strategy to the WAL events. The data will be partitioned by database schema by default, but can be configured when using pgstream as a library. The associated Kafka checkpointer will commit the message offsets per topic/partition so that the consumer group doesn't process the same message twice.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  WAL processor
&lt;/h3&gt;

&lt;p&gt;A processor processes a WAL event. Depending on the implementation, it may also need to checkpoint the event once it has finished processing it, as described above.&lt;/p&gt;

&lt;p&gt;There are currently three implementations of the processor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kafka batch writer:&lt;/strong&gt; it writes the WAL events into a Kafka topic, using the event schema as the Kafka key for partitioning. This implementation allows the sequential WAL events to fan out, while acting as an intermediate buffer that keeps the replication slot from growing when there are slow consumers. It has an internal memory-guarded buffering system to limit the memory usage of the buffer. The buffer is sent to Kafka based on the configured linger time and maximum size. It treats both data and schema events equally, since it disregards the content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search batch indexer:&lt;/strong&gt; it indexes the WAL events into an OpenSearch/Elasticsearch compatible search store. It implements the same kind of mechanism as the Kafka batch writer to ensure continuous processing from the listener, and it also uses a batching mechanism to minimise search store calls. The search mapping logic is configurable when used as a library. The WAL event identity is used as the search store document id, and if no other version is provided, the LSN is used as the document version. Events that do not have an identity are not indexed. Schema events are stored in a separate search store index (&lt;code&gt;pgstream&lt;/code&gt;), where the schema log history is kept for use within the search store (i.e. read queries).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Webhook notifier:&lt;/strong&gt; it sends a notification to any webhooks that have subscribed to the relevant WAL event. It relies on a subscription HTTP server receiving the subscription requests and storing them in the shared subscription store which is accessed whenever a WAL event is processed. It sends the notifications to the different subscribed webhooks in parallel based on a configurable number of workers (client timeouts apply). Similar to the two previous processor implementations, it uses an internal memory-guarded buffering system which separates the WAL event processing from the webhook sending, optimising the processor latency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition to the implementations described above, there's an optional processor decorator, the translator, that injects some of the pgstream logic into the WAL event. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data events:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Setting the WAL event identity. If provided, it will use the configured id finder (only available when used as a library), otherwise it will default to using the table primary key/unique not null column.&lt;/li&gt;
&lt;li&gt;Setting the WAL event version. If provided, it will use the configured version finder (only available when used as a library), otherwise it will default to using the event LSN.&lt;/li&gt;
&lt;li&gt;Adding pgstream IDs to all columns. This allows us to have a constant identifier for a column, so that if there are renames, the column id doesn't change. This is particularly helpful for the search store, where a rename would require a re-index, which can be costly depending on the data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Schema events:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Acknowledging the new incoming schema in the Postgres &lt;code&gt;pgstream.schema_log&lt;/code&gt; table.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;This is only the beginning! We plan to continue developing pgstream and exploring how it can make it easier to replicate data.&lt;/p&gt;

&lt;p&gt;Here are some of the items in our development pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support for multiple Kafka topics&lt;/li&gt;
&lt;li&gt;Additional Postgres plugin support (&lt;code&gt;pgoutput&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Advanced data stream filtering&lt;/li&gt;
&lt;li&gt;Automatic backfill of existing data&lt;/li&gt;
&lt;li&gt;Additional Kafka serialisation formats (&lt;code&gt;avro&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Additional output plugin support (&lt;code&gt;postgres&lt;/code&gt;,&lt;code&gt;clickhouse&lt;/code&gt;, &lt;code&gt;snowflake&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are excited to share this with you and look forward to your feedback! Want to see pgstream in action? Check out this quick demo video.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/0EvSE7z6p_g"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you have any suggestions or questions, please open an issue in our &lt;a href="https://github.com/xataio/pgstream" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;,&lt;br&gt;
reach out to us on &lt;a href="https://xata.io/discord" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; or follow us on &lt;a href="https://twitter.com/xata" rel="noopener noreferrer"&gt;X / Twitter&lt;/a&gt;. We'd love&lt;br&gt;
to hear from you and keep you up to date with the latest progress on &lt;code&gt;pgstream&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>postgres</category>
      <category>database</category>
      <category>kafka</category>
    </item>
    <item>
      <title>Introducing multi-version schema migrations</title>
      <dc:creator>Cezzaine Zaher</dc:creator>
      <pubDate>Sat, 27 Jul 2024 13:10:00 +0000</pubDate>
      <link>https://forem.com/xata/introducing-multi-version-schema-migrations-6hh</link>
      <guid>https://forem.com/xata/introducing-multi-version-schema-migrations-6hh</guid>
      <description>&lt;p&gt;Today we are excited to announce multi-version schema migrations in Xata, which utilizes our open-source library &lt;a href="https://github.com/xataio/pgroll" rel="noopener noreferrer"&gt;pgroll&lt;/a&gt; under the hood. 🎉&lt;/p&gt;

&lt;p&gt;Multi-version schema migrations tackle one of the most painful aspects of application deployment: keeping your application code and database schema in sync.&lt;/p&gt;

&lt;p&gt;Multi-version schema migrations allow both the old and new versions of your database schema to be live and accessible during a schema migration. This greatly simplifies deployment of your applications as both the currently deployed and the new version of your application see the version of the database schema that they expect. And because both versions of the database schema are live at the same time, rollbacks become trivial.&lt;/p&gt;

&lt;p&gt;Behind the scenes, Xata does the heavy-lifting to make this work by presenting different versioned views of your application schema to each version of your application. It then transparently migrates data between the old and new versions of your database schema as data is written to each version.&lt;/p&gt;

&lt;h1&gt;
  
  
  Application rollouts and database schema changes
&lt;/h1&gt;

&lt;p&gt;Rolling out changes to applications in production can be complicated and error prone when there are also database schema changes required to support a new version. During the deployment of the new application version, there will be a period during which both old and new versions of the application are live simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcr2fmji23yqpr5c8x3fz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcr2fmji23yqpr5c8x3fz.png" alt="v1 and v2 applications will be live at the same time during an application rollout" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers have to carefully consider how the two application versions will be able to run simultaneously while the rollout is in progress. This has led to the rise of complicated multi-step workflows for performing simple schema changes, such as PlanetScale's &lt;a href="https://planetscale.com/docs/learn/handling-table-and-column-renames#how-to-rename-a-column-on-planetscale" rel="noopener noreferrer"&gt;six-step guide&lt;/a&gt; for performing a column rename operation.&lt;/p&gt;

&lt;p&gt;Wouldn't it be easier if you could keep both the old and new versions of your database schema live at the same time during a schema migration and have applications connect to the right version?&lt;/p&gt;

&lt;h1&gt;
  
  
  Multi-version schema migrations in Xata
&lt;/h1&gt;

&lt;p&gt;Multi-version schema migrations in Xata allow you to start a migration from Xata and have both versions of your database schema available to applications for the duration of the migration. When your application rollout is complete, the migration can be finished, leaving you with just the final version of your database schema.&lt;/p&gt;

&lt;p&gt;Let's take a look at how this works in practice.&lt;/p&gt;

&lt;p&gt;First, we'll create a table in a fresh database, populate it with some data and then create a migration to add a &lt;code&gt;NOT NULL&lt;/code&gt; constraint to a column in the table. We'll see how multi-version schema migrations allow two versions of the database schema, one with the constraint and one without, to be made available to applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the schema
&lt;/h2&gt;

&lt;p&gt;We'll use Xata to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a simple &lt;code&gt;users&lt;/code&gt; table with a &lt;code&gt;name&lt;/code&gt; column&lt;/li&gt;
&lt;li&gt;Populate the table with some random data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a new Xata database (with Postgres access enabled), we can create a new table in the application and populate it with some random data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gfveceut2lxvu0n9ixx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gfveceut2lxvu0n9ixx.png" alt="Create a new users table in Xata and add a name column" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxpmgdg3h0a6ecdnj44y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxpmgdg3h0a6ecdnj44y.png" alt="Generate some random data to fill the table" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the table and data in place, we can create a migration to add a &lt;code&gt;NOT NULL&lt;/code&gt; constraint to the &lt;code&gt;name&lt;/code&gt; column.&lt;/p&gt;

&lt;h2&gt;
  
  
  The migration editor
&lt;/h2&gt;

&lt;p&gt;The migration editor is used to apply migrations to the database schema. Migrations in Xata use &lt;a href="https://github.com/xataio/pgroll" rel="noopener noreferrer"&gt;pgroll&lt;/a&gt;, our open-source migration library, under the hood so the format of the migration is a JSON object that describes the changes to be made to the database schema.&lt;/p&gt;

&lt;p&gt;We'll create a migration to add a &lt;code&gt;NOT NULL&lt;/code&gt; constraint to the &lt;code&gt;name&lt;/code&gt; column in the &lt;code&gt;users&lt;/code&gt; table. The migration to do so looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"alter_column"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"table"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"users"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"column"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"nullable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"up"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"(SELECT CASE WHEN name IS NULL THEN '(unknown)' ELSE name END)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"down"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The SQL expression in the &lt;code&gt;up&lt;/code&gt; field is used to migrate data from the old version of the &lt;code&gt;name&lt;/code&gt; column that did not have the &lt;code&gt;NOT NULL&lt;/code&gt; constraint to the new version of the column that does. This means the SQL expression needs to describe what to do with any &lt;code&gt;NULL&lt;/code&gt; values that may already exist in the &lt;code&gt;name&lt;/code&gt; field; here we are going to rewrite any &lt;code&gt;NULL&lt;/code&gt;s to a placeholder value of &lt;code&gt;(unknown)&lt;/code&gt;. Any non-&lt;code&gt;NULL&lt;/code&gt; values are copied over to the new version of the &lt;code&gt;name&lt;/code&gt; column without modification.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;down&lt;/code&gt; SQL expression does the opposite; it describes how values written to the new version of the &lt;code&gt;name&lt;/code&gt; column will be migrated back to the old version of the column. Here we simply copy the value without modification.&lt;/p&gt;

&lt;p&gt;Arbitrary transformations can be applied to the data during the migration, allowing for complex data migrations to be performed as part of the schema migration.&lt;/p&gt;

&lt;p&gt;We can take this migration and run it via the Migration Editor:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm5va9cf18jkxzu3kqzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm5va9cf18jkxzu3kqzw.png" alt="Start a migration to add a NOT NULL constraint to the users table" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the migration is started, the table UI shows that we have two versions of the &lt;code&gt;name&lt;/code&gt; column in the &lt;code&gt;users&lt;/code&gt; table, one with the &lt;code&gt;NOT NULL&lt;/code&gt; constraint and one without (this is the &lt;a href="https://blog.thepete.net/blog/2023/12/05/expand/contract-making-a-breaking-change-without-a-big-bang/" rel="noopener noreferrer"&gt;expand/contract pattern&lt;/a&gt; in practice):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfq28r10vhxpt6c2k8qu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfq28r10vhxpt6c2k8qu.png" alt="The table editor shows the two versions of the name field, one with the NOT NULL constraint and one without" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The new version of the &lt;code&gt;name&lt;/code&gt; column has been filled with the data from the old version of the column. Had there been any &lt;code&gt;NULL&lt;/code&gt; values in the column, these would have been rewritten by the &lt;code&gt;up&lt;/code&gt; data migration we specified in the migration file. The two versions of the &lt;code&gt;name&lt;/code&gt; column are always kept in sync for the duration of the migration; as data is written into either one, it is migrated to the other using the &lt;code&gt;up&lt;/code&gt; and &lt;code&gt;down&lt;/code&gt; SQL expressions from the migration file.&lt;/p&gt;

&lt;p&gt;The migration is now in progress and the two versions of the database schema are live and available to applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to each schema version
&lt;/h2&gt;

&lt;p&gt;In order to be able to take advantage of multi-version schema migrations, applications need to be able to connect to the correct version of the database schema. This is done using the Postgres &lt;a href="https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH" rel="noopener noreferrer"&gt;search_path&lt;/a&gt; setting. The &lt;code&gt;search_path&lt;/code&gt; setting determines which schema will be searched for non-schema-qualified table access. By setting the &lt;code&gt;search_path&lt;/code&gt; appropriately, client applications can access the version of the database schema that is compatible with the deployed application code.&lt;/p&gt;

&lt;p&gt;Applications connect to a Xata database using a connection string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgresql://&amp;lt;YOUR_WORKSPACE_ID&amp;gt;:&amp;lt;YOUR_API_KEY&amp;gt;@us-east-1.sql.xata.sh/exampledb:main?sslmode=require
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the connection established, the application can now read and write data to the database using the correct version of the database schema by setting the &lt;code&gt;search_path&lt;/code&gt; for the session. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SET search_path TO &amp;lt;migration name&amp;gt;;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The name of the migration to use is the name of the active migration, which can be found in the schema history of the branch:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid6as16h8ukg0p1b6elo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid6as16h8ukg0p1b6elo.png" alt="The schema history view shows all schema changes on the branch, along with the name of each migration" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By setting the &lt;code&gt;search_path&lt;/code&gt; to the name of either the current version of the database schema or the previous version, an application can access the appropriate version of the database schema.&lt;/p&gt;
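&lt;p&gt;For applications that cannot easily run a &lt;code&gt;SET&lt;/code&gt; statement at the start of every session, the &lt;code&gt;search_path&lt;/code&gt; can also be supplied in the connection string through the standard libpq &lt;code&gt;options&lt;/code&gt; parameter. A minimal sketch (the helper name and the placeholder credentials are hypothetical; the migration name is taken from the example below):&lt;/p&gt;

```python
from urllib.parse import quote

def with_search_path(conn_url: str, migration_name: str) -> str:
    """Append a search_path option to a Postgres connection URL so that
    every session starts in the given schema version. Hypothetical helper;
    the options=-csearch_path=SCHEMA mechanism is standard libpq behaviour."""
    sep = "&" if "?" in conn_url else "?"
    # The option value must be percent-encoded inside a URL query string.
    return conn_url + sep + "options=" + quote("-csearch_path=" + migration_name)

# Placeholder credentials, following the Xata connection-string shape above.
base = "postgresql://WORKSPACE_ID:API_KEY@us-east-1.sql.xata.sh/exampledb:main?sslmode=require"
url = with_search_path(base, "mig_cq778qtlu0oe0bpredl0")
```

&lt;p&gt;The resulting URL can be handed to any libpq-based client or driver, which will then resolve unqualified table names against that schema version.&lt;/p&gt;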

&lt;p&gt;In the following example session, we can switch the &lt;code&gt;search_path&lt;/code&gt; back and forth between the two versions and see how reads work from each version of the schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Work with the new version of the schema:&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;search_path&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mig_cq778qtlu0oe0bpredl0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Retrieve the data from the `users` table&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- +--------------------------+----------------------+&lt;/span&gt;
&lt;span class="c1"&gt;-- | xata_id                  | name                 |&lt;/span&gt;
&lt;span class="c1"&gt;-- |--------------------------|----------------------|&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjeg | as until             |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjhg | beautify ragged      |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjag | because up           |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qj90 | cruelly oof          |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjbg | distant queasy       |&lt;/span&gt;
&lt;span class="c1"&gt;-- ...&lt;/span&gt;

&lt;span class="c1"&gt;-- Work with the old version of the schema:&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;search_path&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mig_cq778jdlu0oe0bpredk0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Retrieve the data from the `users` table&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- +--------------------------+----------------------+&lt;/span&gt;
&lt;span class="c1"&gt;-- | xata_id                  | name                 |&lt;/span&gt;
&lt;span class="c1"&gt;-- |--------------------------|----------------------|&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjeg | as until             |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjhg | beautify ragged      |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjag | because up           |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qj90 | cruelly oof          |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjbg | distant queasy       |&lt;/span&gt;
&lt;span class="c1"&gt;-- ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both versions of the schema show the same data in their view of the &lt;code&gt;users&lt;/code&gt; table - the data migration that was specified in the migration has kept the two in sync.&lt;/p&gt;

&lt;p&gt;Next, we'll see how writes into each version of the schema work, particularly when a &lt;code&gt;NULL&lt;/code&gt; value is written into the &lt;code&gt;name&lt;/code&gt; field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Switch back the new schema, which disallows `NULL`s in the `name` field&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;search_path&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mig_cq778qtlu0oe0bpredl0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Attempt to insert a `NULL` value in the name field&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;-- ERROR null value in column "name" of relation "users" violates not-null constraint&lt;/span&gt;

&lt;span class="c1"&gt;-- Switch back the old schema, which allows `NULL`s in the `name` field&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;search_path&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mig_cq778jdlu0oe0bpredk0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Attempt to insert a `NULL` value in the name field&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;-- Retrieve the data from the `users` table&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- +--------------------------+----------------------+&lt;/span&gt;
&lt;span class="c1"&gt;-- | xata_id                  | name                 |&lt;/span&gt;
&lt;span class="c1"&gt;-- |--------------------------|----------------------|&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v88tqrj67kk4bk9l0 | &amp;lt;null&amp;gt;               |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qj8g | yuck frantically     |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjf0 | worriedly disclose   |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjj0 | weaponize well-to-do |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjcg | tensely drafty       |&lt;/span&gt;
&lt;span class="c1"&gt;-- ...&lt;/span&gt;

&lt;span class="c1"&gt;-- Switch back the new schema&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;search_path&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mig_cq778qtlu0oe0bpredl0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Retrieve the data from the `users` table&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- +--------------------------+----------------------+&lt;/span&gt;
&lt;span class="c1"&gt;-- | xata_id                  | name                 |&lt;/span&gt;
&lt;span class="c1"&gt;-- |--------------------------|----------------------|&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qj8g | yuck frantically     |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjf0 | worriedly disclose   |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjj0 | weaponize well-to-do |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v88tqrj67kk4bk9l0 | (unknown)            |&lt;/span&gt;
&lt;span class="c1"&gt;-- | rec_cq7v4a18fe4f6dj1qjcg | tensely drafty       |&lt;/span&gt;
&lt;span class="c1"&gt;-- ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the old version of the schema, the &lt;code&gt;name&lt;/code&gt; field allows &lt;code&gt;NULL&lt;/code&gt; values, so the &lt;code&gt;INSERT&lt;/code&gt; statement succeeds and reads from the table show the new row with a &lt;code&gt;NULL&lt;/code&gt; value in the &lt;code&gt;name&lt;/code&gt; field. When the same data is read in the new version of the schema, the same row has the placeholder value of &lt;code&gt;(unknown)&lt;/code&gt; that was defined in the migration.&lt;/p&gt;

&lt;p&gt;This example session shows that two applications, connected to the same database but with their &lt;code&gt;search_path&lt;/code&gt; set to different versions, are subject to different constraints and can have different views of the same data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Completing a migration
&lt;/h2&gt;

&lt;p&gt;Once the application rollout is complete and all applications have been updated to the new version, the migration can be completed. Completing a migration means that the old version of the database schema is removed, leaving only the new version.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Once a migration is completed, it cannot be rolled back. Before completing a migration, make sure that all applications that depend on the old version of the schema have been updated to the new version so that the old version of the database schema is no longer required.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Completing an in-progress migration can be done from various places in Xata, including the migrations widget on the dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3ju9hxwv1vs0zl4exsz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3ju9hxwv1vs0zl4exsz.png" alt="The migration can be completed from Xata" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once completed, the table shows just one version of the &lt;code&gt;name&lt;/code&gt; column in the &lt;code&gt;users&lt;/code&gt; table again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah1lwvwrsjviw46qshd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah1lwvwrsjviw46qshd7.png" alt="After migration completion only the final version of the name column remains on the table" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Rolling back a migration
&lt;/h2&gt;

&lt;p&gt;Because multi-version schema migrations use the expand/contract pattern to apply schema changes, rollbacks become much easier. The old version of the database schema was never removed, so a rollback simply means removing the expanded parts of the database schema (the new version of the &lt;code&gt;users.name&lt;/code&gt; column in our example).&lt;/p&gt;

&lt;p&gt;Like completing a migration, a migration can be rolled back from the migrations widget on the dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kum4izpw9cyvknopb40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kum4izpw9cyvknopb40.png" alt="The migration can be easily rolled back" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the migration is rolled back, the new version of the &lt;code&gt;name&lt;/code&gt; column is removed from the &lt;code&gt;users&lt;/code&gt; table, and the database schema and data are in the same state as they were before the migration started.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Only active migrations can be rolled back; once a migration has been completed, it cannot be rolled back.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Multi-version schema migrations simplify one of the most common pain points when deploying applications: keeping the deployed versions of your applications and the database schema in sync.&lt;/p&gt;

&lt;p&gt;This new feature allows you to deploy your applications and make database schema changes independently and with confidence.&lt;/p&gt;

&lt;p&gt;We are excited to share this with you and look forward to your feedback! Want to see multi-version schema migrations in action? Check out this quick demo video.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/TYoW32lF6Xs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://xata.io/start-free" rel="noopener noreferrer"&gt;Sign up for free today&lt;/a&gt;. You can follow along with all of our recent &lt;a href="https://xata.io/launch-week-elephant-on-the-move" rel="noopener noreferrer"&gt;launch week updates&lt;/a&gt;,&lt;a href="https://xata.io/discord" rel="noopener noreferrer"&gt;join us Discord&lt;/a&gt; and say hi 👋&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>data</category>
      <category>news</category>
    </item>
    <item>
      <title>Announcing the public beta for dedicated clusters</title>
      <dc:creator>Cezzaine Zaher</dc:creator>
      <pubDate>Thu, 25 Jul 2024 18:58:29 +0000</pubDate>
      <link>https://forem.com/xata/announcing-the-public-beta-for-dedicated-clusters-2mj</link>
      <guid>https://forem.com/xata/announcing-the-public-beta-for-dedicated-clusters-2mj</guid>
      <description>&lt;p&gt;Earlier this year we announced our &lt;a href="https://xata.io/blog/serverless-postgres-platform" rel="noopener noreferrer"&gt;serverless Postgres platform&lt;/a&gt; and an early access program for &lt;a href="https://xata.io/blog/postgres-dedicated-clusters" rel="noopener noreferrer"&gt;dedicated clusters&lt;/a&gt;. Both of these offerings are core to Xata's evolution to becoming a Postgres database platform for faster development and a worry-free data layer. Serverless Postgres unlocks decades worth of community, tooling and support for our customers. While dedicated clusters provide more predictable costs, better performance and a more scalable solution for large production workloads. The two pair perfectly together, and we'll be diving into what's new in this blog.&lt;/p&gt;

&lt;p&gt;We're extremely excited to announce the next milestone in this evolution, our public beta for dedicated clusters 🎉&lt;/p&gt;

&lt;p&gt;We also want to wholeheartedly thank our EAP customers for not only being early adopters, but also providing amazing feedback throughout the journey. Keep a lookout: you'll all be receiving some premium Xata swag soon 💜&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a dedicated cluster?
&lt;/h2&gt;

&lt;p&gt;Today, most Xata databases live in a shared cell architecture. Our multi-tenant cells include a Postgres Aurora cluster, an Elasticsearch cluster, high availability built in and all the APIs and services required for development. Within each cell, we partition the system per workspace, and each cell can hold up to 2,000 active databases before we have to spin up a new one. Unless you were an EAP customer, your Xata database is using this shared cell architecture.&lt;/p&gt;

&lt;p&gt;Our dedicated clusters are essentially dedicated cells. So you have all the same benefits of the shared cell architecture, with fewer limitations caused by shared resources, more control over your infrastructure, decoupled usage-based pricing and the confidence that Xata can scale with your business.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8dpgn1clnvkxvf5n9qo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8dpgn1clnvkxvf5n9qo.png" alt="Postgres branches allocated to shared and dedicated clusters" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I won't get into the weeds today, but if you're interested in learning more, &lt;a href="https://xata.io/blog/postgres-dedicated-clusters" rel="noopener noreferrer"&gt;this blog&lt;/a&gt; and &lt;a href="https://xata.io/docs/dedicated-cluster" rel="noopener noreferrer"&gt;our documentation&lt;/a&gt; are great resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started with $1,000 in credit
&lt;/h2&gt;

&lt;p&gt;To kick off this &lt;a href="https://xata.io/launch-week-elephant-on-the-move" rel="noopener noreferrer"&gt;launch week&lt;/a&gt; right and celebrate our public beta release, we will be offering &lt;strong&gt;$1,000 in credit&lt;/strong&gt; for Pro customers who start using a dedicated cluster between now and its general availability later this year. This credit will be valid through the end of 2025, but you will only have a few months to qualify. As a customer, you do not need to apply for the credit. Any usage from a dedicated cluster will automatically be deducted and you have full control over what the deployment looks like.&lt;/p&gt;

&lt;p&gt;If you're already a happy Xata customer, you can easily move all of your databases or a single branch to a dedicated cluster in just a few clicks.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new
&lt;/h2&gt;

&lt;p&gt;The team has been hard at work over the last few months building out features and functionality to provide the best experience to our beta customers. Starting today you can spin up dedicated Postgres clusters in instance classes ranging from &lt;code&gt;db.t4g.medium&lt;/code&gt; to &lt;code&gt;db.r6g.2xlarge&lt;/code&gt; in both &lt;code&gt;us-east-1&lt;/code&gt; and &lt;code&gt;eu-central-1&lt;/code&gt;. We plan to introduce more classes, versions, regions and extensions in the coming months, so &lt;a href="https://xata.io/contact" rel="noopener noreferrer"&gt;please let us know&lt;/a&gt; what you'd like to see.&lt;/p&gt;

&lt;p&gt;Here are just some of the many features and functionality available starting today.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customize your infrastructure for your use case
&lt;/h3&gt;

&lt;p&gt;Whether you're provisioning a new cluster or editing the one you already have, there are numerous configuration options to make sure you have the appropriate infrastructure for your use case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokaok358ucsk8ljdymla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokaok358ucsk8ljdymla.png" alt="Provision a new cluster" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, our Postgres databases are &lt;a href="https://aws.amazon.com/rds/aurora/" rel="noopener noreferrer"&gt;Amazon Aurora instances&lt;/a&gt;. You can trust that your database will have the scalability, reliability and security that AWS is known for. With dedicated clusters you can configure the Postgres engine version, the cluster class and the number of replicas for failover and query distribution.&lt;/p&gt;

&lt;p&gt;Autoscaling is a great option if you expect variable usage of your application and database. It scales your cluster up or down as needed within a capacity range you define, so you do not incur unexpected costs.&lt;/p&gt;

&lt;p&gt;Weekly maintenance windows and daily backup windows can be defined, with full control over when updates and version upgrades are applied.&lt;/p&gt;

&lt;p&gt;Most of the configuration options impact the monthly cost, and a simple estimated monthly cost is presented to you before you provision the cluster.&lt;/p&gt;

&lt;p&gt;As we look ahead, we plan to provide templates that include infrastructure and configuration best practices for specific use cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Visibility into usage of your cluster
&lt;/h3&gt;

&lt;p&gt;Every dedicated cluster has an overview page, where you can get a sense of what the usage of your infrastructure looks like and where the bottlenecks may be occurring. This view provides visibility into CPU utilization, database connections, freeable memory, storage, receive and transmit throughput and read/write IO.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftb8ze2t0w119g9wtyheb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftb8ze2t0w119g9wtyheb.png" alt="Dedicated cluster overview page" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These metrics are not only useful for monitoring usage, but also give you a sense of what that usage will cost on top of the dedicated infrastructure. For more details on pricing for dedicated clusters, see our &lt;a href="https://xata.io/pricing" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Extensions now supported
&lt;/h3&gt;

&lt;p&gt;One of the drawbacks (and reasons for building dedicated clusters) is that shared clusters, while useful for most projects, have certain limitations because of the shared resources. There are simply more security precautions for every new supported piece of functionality we add. Naturally, we cannot provide the same &lt;a href="https://xata.io/docs/dedicated-cluster#extensions" rel="noopener noreferrer"&gt;supported extensions&lt;/a&gt; in shared clusters as we can in dedicated clusters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnq4dp9825rwxop59uyaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnq4dp9825rwxop59uyaz.png" alt="Extensions available today" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've started adding Postgres extensions based on popularity amongst our customer base, starting with &lt;code&gt;plpgsql&lt;/code&gt;, &lt;code&gt;uuid-ossp&lt;/code&gt;, &lt;code&gt;pgcrypto&lt;/code&gt;, &lt;code&gt;vector&lt;/code&gt;, &lt;code&gt;postgres_fdw&lt;/code&gt;, &lt;code&gt;pg_walinspect&lt;/code&gt; and &lt;code&gt;postgis&lt;/code&gt;. We will continue to grow this library of extensions over time, but please let us know if there's one missing you'd like to see.&lt;/p&gt;

&lt;h3&gt;
  
  
  Move your database and branches between clusters
&lt;/h3&gt;

&lt;p&gt;One of the most powerful features in this beta release is the ability to move branches between clusters. You can take your main database or one of its branches and move it to another cluster in just a few clicks. We use this internally to provide a forever-free tier, automatically moving inactive databases to a special cell meant for longer-term storage and moving them back to an active cell if they are re-activated. Our CTO, &lt;a href="https://www.linkedin.com/in/tudorgolubenco/" rel="noopener noreferrer"&gt;Tudor Golubenco&lt;/a&gt;, dives into the economics of our free tier and the technical details behind it in &lt;a href="https://xata.io/blog/postgres-free-tier" rel="noopener noreferrer"&gt;this blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c3yjtd46wk4srwbv6hw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c3yjtd46wk4srwbv6hw.png" alt="Move a branch between clusters" width="677" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That is one internal use case for this functionality, but it opens up many possibilities for blue-green deployments, testing / developer environments and database management with zero downtime.&lt;/p&gt;

&lt;p&gt;Complementing the ability to move branches between clusters, you can now define a default cluster for your database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vlyp67fmjlinagu9uha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vlyp67fmjlinagu9uha.png" alt="Update your default cluster" width="678" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you choose a default cluster, any new branch within that database will automatically be created there. This is perfect for development workflows that use branching for staging environments or pull-request-based development. Your dedicated cluster could be reserved for your production database, while each new branch is automatically created in a shared cluster or a dedicated development cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://x.com/xata" rel="noopener noreferrer"&gt;Stay tuned&lt;/a&gt;, we'll dive into some use cases and workflows that are now possible with the movement of branches later this week.&lt;/p&gt;

&lt;h2&gt;
  
  
  SQL, unlocked
&lt;/h2&gt;

&lt;p&gt;As you'll soon see, dedicated clusters have been only a portion of our focus since our &lt;a href="https://xata.io/launch-week-unleash-the-elephant" rel="noopener noreferrer"&gt;last launch week&lt;/a&gt;. We've also spent time improving our Postgres offering, the tooling around it and better supporting the use cases and workflows our customers have been asking for.&lt;/p&gt;

&lt;p&gt;Our SQL offering &lt;a href="https://xata.io/blog/sql-over-http" rel="noopener noreferrer"&gt;is built on a proxy&lt;/a&gt;. This approach allows us to provide the best and most secure serverless experience possible. Over the last few months, we've been heads down on making sure that our Postgres offering works with the ORMs and tooling you know and love.&lt;/p&gt;

&lt;h3&gt;
  
  
  Even more compatible with your favorite clients and tooling
&lt;/h3&gt;

&lt;p&gt;We've put a special focus on making sure &lt;a href="https://orm.drizzle.team/" rel="noopener noreferrer"&gt;Drizzle&lt;/a&gt;, &lt;a href="https://www.prisma.io/" rel="noopener noreferrer"&gt;Prisma&lt;/a&gt;, &lt;a href="https://www.sqlalchemy.org/" rel="noopener noreferrer"&gt;SQLAlchemy&lt;/a&gt; and &lt;a href="https://www.djangoproject.com/" rel="noopener noreferrer"&gt;Django&lt;/a&gt; ORMs work well with our platform. Common administrative and data exploration tools like &lt;a href="https://www.jetbrains.com/datagrip/" rel="noopener noreferrer"&gt;DataGrip&lt;/a&gt;, &lt;a href="https://www.pgadmin.org/" rel="noopener noreferrer"&gt;pgAdmin&lt;/a&gt; and &lt;a href="https://tableplus.com/" rel="noopener noreferrer"&gt;TablePlus&lt;/a&gt; have been put through the wringer to resolve any compatibility hiccups seen over the past few months.&lt;/p&gt;

&lt;p&gt;With our recent support for multiple Postgres schemas, and a configurable &lt;code&gt;search_path&lt;/code&gt;, our offering is now much more compatible with a growing set of clients and tools. Our &lt;a href="https://xata.io/docs/postgres#supported-statements" rel="noopener noreferrer"&gt;list of supported statements&lt;/a&gt; continues to expand, and we aim to maximize compatibility in both shared and dedicated clusters in the coming months.&lt;/p&gt;
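&lt;p&gt;As a quick illustration of what a configurable &lt;code&gt;search_path&lt;/code&gt; enables, here is the standard Postgres session-level equivalent. The connection string and schema names below are placeholders, and this is plain Postgres behavior rather than a Xata-specific command.&lt;/p&gt;

```shell
# Placeholder connection string; substitute your own database URL.
# Resolve unqualified table names against the "app" schema first, then "public".
psql "$DATABASE_URL" -c 'SET search_path TO app, public; SHOW search_path;'
```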

&lt;p&gt;Thank you to all our early adopters for providing feedback early on; we couldn't have made this progress without you 🦋&lt;/p&gt;

&lt;h3&gt;
  
  
  Playground has been upgraded to Queries
&lt;/h3&gt;

&lt;p&gt;Outside of the table editor, the most popular part of our application is the Playground. And we've seen significant growth in this workflow for SQL queries over the past year. With this release, we're upgrading the Playground to Queries. Queries will be much more prominent in the navigation, making it really easy to pick up where you left off. You can still run TypeScript and Python code, but we've significantly improved the SQL experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0geq5g3twojreyknz06v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0geq5g3twojreyknz06v.png" alt="Query view with multiple SQL statements" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Previously, the output was JSON only. We now default to a table view. You can run multiple query statements at once, and a tab is created automatically for each output. All queries are persisted server-side, so they're ready for reuse wherever you are.&lt;/p&gt;

&lt;p&gt;This is just the beginning for Queries. We plan to do much more here: a table output with full feature parity with the editor, the ability to favorite or pin queries, and eventually the option to share them between users in (and outside of) your workspace. Keep an eye on this view; we'll be improving it incrementally over time. If you have a specific feature request, feel free to &lt;a href="https://xata.canny.io/feature-requests" rel="noopener noreferrer"&gt;drop it in here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Watch a demo
&lt;/h2&gt;

&lt;p&gt;Want to see dedicated clusters in action? Check out this quick demo video.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/LTenn_pvBuM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you haven't tried out our serverless Postgres offering yet, it's really easy to get started. Simply &lt;a href="https://app.xata.io/signup" rel="noopener noreferrer"&gt;sign up for Xata&lt;/a&gt;, use &lt;code&gt;pg_dump&lt;/code&gt; to export your existing database and &lt;code&gt;pg_restore&lt;/code&gt; to &lt;a href="https://xata.io/docs/postgres#import" rel="noopener noreferrer"&gt;import into Xata&lt;/a&gt;.&lt;/p&gt;
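&lt;p&gt;As a sketch, the export and import might look like the following. Every hostname, database name, and credential here is a placeholder; substitute your own values and see the import docs linked above for the authoritative steps.&lt;/p&gt;

```shell
# Export the existing database in pg_dump's custom format
# (placeholder connection string -- use your current database's URL)
pg_dump --format=custom --file=mydb.dump \
  "postgresql://user:password@old-host:5432/mydb"

# Restore into Xata (placeholder connection string; copy the real one
# from your Xata workspace)
pg_restore --no-owner --no-privileges \
  --dbname="postgresql://workspace:YOUR_API_KEY@region.sql.xata.sh:5432/mydb:main" \
  mydb.dump
```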

&lt;p&gt;We can't wait to see what you build 🦋&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>news</category>
    </item>
    <item>
      <title>Build your own image gallery CMS</title>
      <dc:creator>Cezzaine Zaher</dc:creator>
      <pubDate>Wed, 12 Jun 2024 11:00:00 +0000</pubDate>
      <link>https://forem.com/xata/build-your-own-image-gallery-cms-4pao</link>
      <guid>https://forem.com/xata/build-your-own-image-gallery-cms-4pao</guid>
      <description>&lt;p&gt;In this post, I talk about how to create an image gallery CMS with Astro, Xata and Cloudflare Pages. You'll learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up Xata&lt;/li&gt;
&lt;li&gt;Create a schema with different column types&lt;/li&gt;
&lt;li&gt;Resize and blur images&lt;/li&gt;
&lt;li&gt;Fetch all records without pagination&lt;/li&gt;
&lt;li&gt;Handle forms in Astro using view transitions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;You'll need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://app.xata.io/signin" rel="noopener noreferrer"&gt;Xata&lt;/a&gt; account&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodejs.org/en/download" rel="noopener noreferrer"&gt;Node.js 18&lt;/a&gt; or later&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://workers.cloudflare.com/" rel="noopener noreferrer"&gt;Cloudflare&lt;/a&gt; account&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tech stack
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://xata.io" rel="noopener noreferrer"&gt;Xata&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Serverless database platform for scalable, real-time applications.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://astro.build" rel="noopener noreferrer"&gt;Astro&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Framework for building fast, modern websites with serverless backend support.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://tailwindcss.com/" rel="noopener noreferrer"&gt;Tailwind CSS&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;CSS framework for building custom designs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://workers.cloudflare.com/" rel="noopener noreferrer"&gt;Cloudflare Pages&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Platform for deploying and hosting web apps with global distribution.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Setting up a Xata database
&lt;/h2&gt;

&lt;p&gt;After you've created a Xata account and are logged in, create a database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfk5e1hr54vhua951nfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfk5e1hr54vhua951nfd.png" alt="Create a database" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to create a table, in this instance &lt;code&gt;uploads&lt;/code&gt;, that contains all the uploaded images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6hi8dq14akkztlca06t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6hi8dq14akkztlca06t.png" alt="Create a table" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great! Now, click &lt;strong&gt;Schema&lt;/strong&gt; in the left sidebar and create two more tables, &lt;code&gt;profiles&lt;/code&gt; and &lt;code&gt;photographs&lt;/code&gt;, by clicking &lt;strong&gt;Add a table&lt;/strong&gt;. These tables will contain user profile data and user-uploaded photograph data, respectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r5innliq035mfdpmrqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r5innliq035mfdpmrqn.png" alt="Newly created table" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With that completed, you will see the schema.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhj6jo5z9o1gtmybog87p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhj6jo5z9o1gtmybog87p.png" alt="Schema displayed" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s move on to adding relevant columns in the tables you've just created.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the schema
&lt;/h2&gt;

&lt;p&gt;In the &lt;code&gt;uploads&lt;/code&gt; table, you want to store only the images (and no other attributes), so that you can create references to the same image object again if needed.&lt;/p&gt;

&lt;p&gt;Proceed with adding a column named &lt;code&gt;image&lt;/code&gt;. This column is responsible for storing &lt;code&gt;file&lt;/code&gt; type objects. In our case, the &lt;code&gt;file&lt;/code&gt; type object is an image, but you can use it to store any kind of blob (e.g. PDFs, fonts, etc.) up to 1 GB in size.&lt;/p&gt;

&lt;p&gt;First, click &lt;strong&gt;+ Add column&lt;/strong&gt; and select &lt;strong&gt;File&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3rqdn7benw5gol0m5qn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3rqdn7benw5gol0m5qn.png" alt="Add column" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the column name to &lt;code&gt;image&lt;/code&gt; and to make files public (so that they can be shown to users when they visit the image gallery), check the &lt;strong&gt;Make files public by default&lt;/strong&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcljoqiz4wafu5ifvf9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcljoqiz4wafu5ifvf9r.png" alt="Make files public" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;profiles&lt;/code&gt; table, we want to store attributes such as the user’s unique slug (the URL path where that user’s gallery is displayed), their name, and their image along with its dimensions and the transformed image’s Base64 blur hash. Storing the hash lets you build pages with zero Cumulative Layout Shift (CLS).&lt;/p&gt;

&lt;p&gt;Proceed with adding the column named &lt;code&gt;slug&lt;/code&gt;. It is responsible for maintaining the uniqueness of each profile that gets created. Click &lt;strong&gt;+ Add a column&lt;/strong&gt;, select &lt;code&gt;String&lt;/code&gt; type and enter the column name as &lt;code&gt;slug&lt;/code&gt;. To associate a slug with only one user, check the &lt;code&gt;Unique&lt;/code&gt; attribute to make sure that duplicate entries do not get inserted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwza0psek5iq1hu0df29i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwza0psek5iq1hu0df29i.png" alt="Add slug column" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In similar fashion, create &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;image&lt;/code&gt;, &lt;code&gt;height&lt;/code&gt; and &lt;code&gt;width&lt;/code&gt; columns as &lt;code&gt;String&lt;/code&gt; type (but not &lt;code&gt;Unique&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Great, you can also store &lt;code&gt;imageHash&lt;/code&gt; as a &lt;code&gt;Text&lt;/code&gt; type column so that you can instantly retrieve the image’s blur hash, which can be up to &lt;code&gt;200 KB&lt;/code&gt; in size. While &lt;code&gt;String&lt;/code&gt; is a great default type, storing more than 2048 characters requires switching to the &lt;code&gt;Text&lt;/code&gt; type. Read more about the limits in &lt;a href="https://xata.io/docs/rest-api/limits#column-limits" rel="noopener noreferrer"&gt;Xata Column limits&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;+ Add a column&lt;/strong&gt; and select the &lt;code&gt;Text&lt;/code&gt; type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9k5ycjgjegr73b2dm1k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9k5ycjgjegr73b2dm1k.png" alt="Add a column" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the column name as &lt;code&gt;imageHash&lt;/code&gt; and press &lt;code&gt;Create column&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyiurs8po34ca2o6ufld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyiurs8po34ca2o6ufld.png" alt="Enter the column name" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Much like what we did above, in the &lt;code&gt;photographs&lt;/code&gt; table, create &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;tagline&lt;/code&gt;, &lt;code&gt;image&lt;/code&gt;, &lt;code&gt;height&lt;/code&gt;, &lt;code&gt;width&lt;/code&gt;, &lt;code&gt;profile-slug&lt;/code&gt;, and &lt;code&gt;slug&lt;/code&gt; as &lt;code&gt;String&lt;/code&gt; type columns and &lt;code&gt;imageHash&lt;/code&gt; as a &lt;code&gt;Text&lt;/code&gt; type column. The &lt;code&gt;slug&lt;/code&gt; and &lt;code&gt;profile-slug&lt;/code&gt; columns refer to the photograph’s and the user profile’s slugs, respectively.&lt;/p&gt;

&lt;p&gt;Lovely! With all that done, the final schema will look something like the following...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0555isfim5jgyntzj9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0555isfim5jgyntzj9p.png" alt="Final schema" width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up your project
&lt;/h2&gt;

&lt;p&gt;To set up, clone the app repo and follow this tutorial to learn everything that's in it. To clone the project and install its dependencies, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/rishi-raj-jain/image-gallery-cms-with-astro-xata-cloudflare
&lt;span class="nb"&gt;cd &lt;/span&gt;image-gallery-cms-with-astro-xata-cloudflare
pnpm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configure Xata with Astro
&lt;/h2&gt;

&lt;p&gt;To use Xata with Astro seamlessly, install the Xata CLI globally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @xata.io/cli &lt;span class="nt"&gt;-g&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, authorize the Xata CLI so it is associated with your logged-in account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xata auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2frsra80yrjxdf33823.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2frsra80yrjxdf33823.png" alt="Authorize the CLI" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great! Now, initialize your project locally with the Xata CLI command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xata init &lt;span class="nt"&gt;--db&lt;/span&gt; https://Rishi-Raj-Jain-s-workspace-80514q.ap-southeast-2.xata.sh/db/image-gallery-cms-with-xata-astro-cloudflare
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Answer some quick one-time questions from the CLI to integrate with Astro.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvx8xjkqcdj82pckujvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvx8xjkqcdj82pckujvw.png" alt="Xata init with Astro project" width="800" height="850"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing form actions in Astro
&lt;/h2&gt;

&lt;p&gt;You can also allow transitions on form submissions by adding the &lt;code&gt;ViewTransitions&lt;/code&gt; component.&lt;br&gt;
Here’s an example of the enabled form actions with view transitions in &lt;code&gt;src/layouts/Layout.astro&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="o"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;// File: src/layouts/Layout.astro&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ViewTransitions&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;astro:transitions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;html&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ViewTransitions&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="c"&gt;&amp;lt;!--&lt;/span&gt; &lt;span class="nx"&gt;stuff&lt;/span&gt; &lt;span class="nx"&gt;here&lt;/span&gt; &lt;span class="o"&gt;--&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/head&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="c"&gt;&amp;lt;!--&lt;/span&gt; &lt;span class="nx"&gt;stuff&lt;/span&gt; &lt;span class="nx"&gt;here&lt;/span&gt; &lt;span class="o"&gt;--&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/body&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/html&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows you to colocate the backend and frontend flow for a given page in Astro. Say you accept a form submission containing the user's name, slug, and image URL, process it on the server to generate a Base64 blur hash, and sync it with your Xata serverless database. Here’s how you’d do all of that in a single Astro route (&lt;code&gt;src/pages/profile/create.astro&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="o"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;// File: src/pages/profile/create.astro&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;form&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;created&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Astro&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Indicate that the request is being processed&lt;/span&gt;
    &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;form&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

    &lt;span class="c1"&gt;// Get the user email from the form submissions&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Astro&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="c1"&gt;// Get the user slug, name, and image: URL, width, and height from the form submissions&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userSlug&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;slug&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userImage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;custom_upload_user__uploaded_image_url&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userImageW&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;custom_upload_user__uploaded_w&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userImageH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;custom_upload_user__uploaded_h&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;

        &lt;span class="c1"&gt;// Create a blur url of the user image&lt;/span&gt;

        &lt;span class="c1"&gt;// Create the user record with the slug&lt;/span&gt;

        &lt;span class="c1"&gt;// Redirect user to the next step&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// pass&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;form&lt;/span&gt; &lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;post&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;autocomplete&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;off&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Upload&lt;/span&gt; &lt;span class="nx"&gt;selector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt; &lt;span class="nx"&gt;required&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;placeholder&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;
    &lt;span class="nx"&gt;required&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;slug&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;placeholder&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Slug (e.g. rishi-raj-jain)&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;button&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;submit&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nx"&gt;Create&lt;/span&gt; &lt;span class="nx"&gt;Profile&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nx"&gt;rarr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/button&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/form&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Handling image uploads server-side with the Xata SDK
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcwk4g45jlw1fceez17s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcwk4g45jlw1fceez17s.png" alt="(/photograph/create - where users upload their photographs)" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because Cloudflare Pages allows request bodies of up to 100 MB, you can handle image uploads on the server side. Create an Astro endpoint (&lt;code&gt;src/pages/api/upload/index.ts&lt;/code&gt;) that receives POST requests containing image binaries and uses the Xata SDK to store them in the &lt;code&gt;uploads&lt;/code&gt; table.&lt;/p&gt;

&lt;p&gt;After doing sanity checks on the request body, first create a new (empty) record in your &lt;code&gt;uploads&lt;/code&gt; table, then use it as a reference to attach the image (buffer) using the Xata TypeScript SDK. On success, the endpoint responds to the front-end with the image’s public URL, &lt;code&gt;height&lt;/code&gt;, and &lt;code&gt;width&lt;/code&gt;, which are then included in the form fields.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File: src/pages/api/upload/index.ts&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;json&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/lib/response&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getXataClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/xata&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;APIContext&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;astro&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Import the Xata Client created by the Xata CLI in src/xata.ts&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getXataClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;POST&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;APIContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;file&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Do sanity checks on file&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Obtain the uploaded file as an ArrayBuffer&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileBuffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;arrayBuffer&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// Create an empty record in the uploads table&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;uploads&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({});&lt;/span&gt;
    &lt;span class="c1"&gt;// Using the id of the record, insert the file using upload method&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upload&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;uploads&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;record&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;column&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;image&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;fileBuffer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;mediaType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="c1"&gt;// Read the inserted image&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;uploads&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Destructure its dimension and public URL&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;attributes&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Handle errors&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using Xata image transformations to create blurred images
&lt;/h2&gt;

&lt;p&gt;Once a user submits their profile on the &lt;code&gt;/profile/create&lt;/code&gt; page, before creating a record in the &lt;code&gt;profiles&lt;/code&gt; table, generate a Base64 buffer of their blurred image. To create blurred images from the originals, use Xata image transformations: they let you request an on-demand public URL that resizes the image to a given height and width and applies a blur. In this particular example, the image is resized to 100 x 100 and blurred at 75% intensity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="o"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;// File: src/pages/profile/create.astro&lt;/span&gt;

&lt;span class="c1"&gt;// Import the Xata Client created by the Xata CLI in src/xata.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getXataClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/xata&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="c1"&gt;// Import the transformImage function by Xata Client&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;transformImage&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@xata.io/client&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;form&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;created&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Astro&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="c1"&gt;// Fetch the Xata instance&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getXataClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;// ...&lt;/span&gt;

    &lt;span class="c1"&gt;// Create a blur URL of the user image&lt;/span&gt;

  &lt;span class="c1"&gt;// Using Xata image transformations to obtain the image URL&lt;/span&gt;
    &lt;span class="c1"&gt;// with a fixed height and width and 75% of it blurred&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userBlurURL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;transformImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userImageURL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;blur&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;75&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="c1"&gt;// Create a Base64 hash of the blur image URL&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userBlurHash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;createBlurHash&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userBlurURL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;// Create the user record with the slug&lt;/span&gt;

    &lt;span class="c1"&gt;// Redirect user to the next step&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
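&lt;p&gt;The &lt;code&gt;createBlurHash&lt;/code&gt; helper used above isn’t shown in this excerpt. One plausible implementation, sketched under the assumption that it fetches the small blurred image and inlines it as a Base64 data URL (the helper name and return shape are inferred from how it’s used, not taken from the Xata SDK):&lt;/p&gt;

```typescript
// Encode raw image bytes as a Base64 data URL, usable as a CSS
// background-image while the full-resolution image loads.
export function toDataURL(buffer: ArrayBuffer, mediaType: string): string {
  const base64 = Buffer.from(buffer).toString('base64');
  return `data:${mediaType};base64,${base64}`;
}

// Fetch the (tiny, blurred) transformed image and inline it.
// Assumes a runtime with a global fetch (Node 18+, Cloudflare Workers).
export async function createBlurHash(blurURL: string): Promise<string> {
  const res = await fetch(blurURL);
  if (!res.ok) throw new Error(`Failed to fetch blur image: ${res.status}`);
  const mediaType = res.headers.get('content-type') ?? 'image/png';
  return toDataURL(await res.arrayBuffer(), mediaType);
}
```

&lt;p&gt;Because the transformed image is only 100 x 100 and heavily blurred, the resulting data URL stays small enough to store directly in the &lt;code&gt;imageHash&lt;/code&gt; column.&lt;/p&gt;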



&lt;h2&gt;
  
  
  Syncing profiles using the Xata SDK
&lt;/h2&gt;

&lt;p&gt;After generating the blurred image, the last step in publishing a profile (mirroring the photograph publishing flow) is to create a user record with the relevant details using the Xata TypeScript SDK’s &lt;code&gt;create&lt;/code&gt; method. In Astro, we then set a success message for the user before redirecting them to the photograph upload page. Conditional rendering provides a visual cue of the operation’s success or failure, keeping the experience responsive and user-friendly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="o"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;// File: src/pages/profile/create.astro&lt;/span&gt;

&lt;span class="c1"&gt;// Import the Xata Client created by Xata CLI in src/xata.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getXataClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/xata&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;form&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;created&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Astro&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="c1"&gt;// ...&lt;/span&gt;

    &lt;span class="c1"&gt;// Create the user record with the slug&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;profiles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;slug&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userSlug&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userImage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userImageW&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userImageH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;imageHash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userBlurHash&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="c1"&gt;// Send the user to photograph upload page&lt;/span&gt;
  &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;redirect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/photograph/create&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

  &lt;span class="c1"&gt;// Set the relevant message for the user&lt;/span&gt;
  &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Published profile succesfully. Redirecting you to upload your first photograph...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="c"&gt;&amp;lt;!--&lt;/span&gt; &lt;span class="nx"&gt;Render&lt;/span&gt; &lt;span class="nx"&gt;conditional&lt;/span&gt; &lt;span class="nx"&gt;states&lt;/span&gt; &lt;span class="nx"&gt;using&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;onboarded&lt;/span&gt; &lt;span class="nx"&gt;flag&lt;/span&gt; &lt;span class="o"&gt;--&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;form&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;created&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rounded bg-green-100 px-3 py-1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/p&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rounded bg-red-100 px-3 py-1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/p&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;&amp;lt;!--&lt;/span&gt; &lt;span class="nx"&gt;Profile&lt;/span&gt; &lt;span class="nx"&gt;Form&lt;/span&gt; &lt;span class="o"&gt;--&amp;gt;&lt;/span&gt;

&lt;span class="c"&gt;&amp;lt;!--&lt;/span&gt; &lt;span class="nx"&gt;Redirect&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="o"&gt;--&amp;gt;&lt;/span&gt;
&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;script&lt;/span&gt; &lt;span class="nx"&gt;define&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;vars&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;created&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;location&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;redirect&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;success&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;href&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/script&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using Xata query
&lt;/h2&gt;

&lt;p&gt;The user profile page (&lt;code&gt;src/pages/[profile]/index.astro&lt;/code&gt;) uses the Xata Client to dynamically fetch and display all the photographs (unpaginated) for a given profile. To engage users visually as soon as they open the gallery, the stored &lt;code&gt;imageHash&lt;/code&gt; value serves as the background of each image while it loads. To prevent cumulative layout shift (CLS), the stored &lt;code&gt;width&lt;/code&gt; and &lt;code&gt;height&lt;/code&gt; values tell the browser the expected dimensions of each image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="o"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;// File: src/pages/[profile]/index.astro&lt;/span&gt;

&lt;span class="c1"&gt;// Import the Xata Client created by Xata CLI in src/xata.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getXataClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/xata&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Layout&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/layouts/Layout.astro&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="c1"&gt;// Get the profile slug from url path&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;profile&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Astro&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;

&lt;span class="c1"&gt;// Fetch the Xata instance&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getXataClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;// Get all the photographs related to the profile&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;profilePhotographs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;photographs&lt;/span&gt;
                                                        &lt;span class="c1"&gt;// Filter the results to the specific profile&lt;/span&gt;
                                                        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;profile-slug&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;profile&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
                                                        &lt;span class="c1"&gt;// Get all the photographs&lt;/span&gt;
                                                        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getAll&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Layout&lt;/span&gt; &lt;span class="nx"&gt;className&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;flex flex-col&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;columns-1 gap-0 md:columns-2 lg:columns-3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;profilePhotographs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;photoW&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;photoH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;photoName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;photoImageURL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;tagline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;photoTagline&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;imageHash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;photoImageHash&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
                    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="cm"&gt;/* Destructure the width and height to prevent CLS */&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
          &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;img&lt;/span&gt;
            &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;photoW&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;photoH&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="nx"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;photoName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;photoImageURL&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
                        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="cm"&gt;/* Do not lazy load the first image that's loaded into the DOM */&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="nx"&gt;loading&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;_&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;eager&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;lazy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="kd"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;transform bg-cover bg-center bg-no-repeat will-change-auto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
                        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="cm"&gt;/* Create a blur effect with the imageHash stored */&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;`background-image: url(&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;photoImageHash&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;); transform: translate3d(0px, 0px, 0px);`&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
          &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Layout&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm7igjlt8jnwwvkzsldm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm7igjlt8jnwwvkzsldm.png" alt="(Image gallery - presenting blur images initially)" width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9pl4fg9oa42bum8bce6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9pl4fg9oa42bum8bce6.png" alt="(Image gallery - after image loads complete)" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy to Cloudflare Pages
&lt;/h2&gt;

&lt;p&gt;The repository is ready to deploy to Cloudflare. Follow the steps below to deploy seamlessly with Cloudflare 👇🏻&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a GitHub repository with the app code.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create application&lt;/strong&gt; in the Workers &amp;amp; Pages section of the Cloudflare dashboard.&lt;/li&gt;
&lt;li&gt;Navigate to the &lt;strong&gt;Pages&lt;/strong&gt; tab and select &lt;strong&gt;Connect to Git&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Link the created GitHub repository as your new project.&lt;/li&gt;
&lt;li&gt;Scroll down and update the &lt;strong&gt;Framework preset&lt;/strong&gt; to &lt;strong&gt;Astro&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Add the environment variables from your local &lt;code&gt;.env&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Save and Deploy&lt;/strong&gt; and go back to the project &lt;strong&gt;Settings&lt;/strong&gt; &amp;gt; &lt;strong&gt;Functions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;nodejs_compat&lt;/code&gt; to the &lt;strong&gt;Compatibility flags&lt;/strong&gt; section.&lt;/li&gt;
&lt;li&gt;Deploy! 🚀&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Why Cloudflare Pages?
&lt;/h3&gt;

&lt;p&gt;Cloudflare Pages stood out for this particular use case as it &lt;a href="https://developers.cloudflare.com/workers/platform/limits/#request-limits" rel="noopener noreferrer"&gt;offers up to a 100 MB request body size on its Free plan&lt;/a&gt;. This helps bypass the 4.5 MB request body size limit imposed by various other serverless hosting providers.&lt;/p&gt;
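The difference matters for an image gallery, where a single photo can easily exceed 4.5 MB. As a minimal sketch (the two byte limits come from the docs linked above; the helper name is my own, not part of the app), you could reject oversized uploads before the request ever leaves the browser:

```typescript
// Guard uploads against the hosting platform's request body limit.
// 100 MB matches Cloudflare Pages' Free plan; 4.5 MB is the limit this
// post mentions for various other serverless hosting providers.
const CLOUDFLARE_PAGES_LIMIT = 100 * 1024 * 1024;
const TYPICAL_SERVERLESS_LIMIT = 4.5 * 1024 * 1024;

function fitsRequestLimit(fileSizeBytes: number, limitBytes: number = CLOUDFLARE_PAGES_LIMIT): boolean {
  return fileSizeBytes <= limitBytes;
}

const tenMegabytes = 10 * 1024 * 1024;
console.log(fitsRequestLimit(tenMegabytes));                           // true on Cloudflare Pages
console.log(fitsRequestLimit(tenMegabytes, TYPICAL_SERVERLESS_LIMIT)); // false on a 4.5 MB provider
```

In a real form you would read `file.size` from the `File` object in the upload input and show an error instead of submitting.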

&lt;h2&gt;
  
  
  More information
&lt;/h2&gt;

&lt;p&gt;For more detailed insights, explore the references cited in this post.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Demo Image Gallery&lt;/th&gt;
&lt;th&gt;&lt;a href="https://image-gallery-cms-with-astro-xata-cloudflare.pages.dev/rishi" rel="noopener noreferrer"&gt;https://image-gallery-cms-with-astro-xata-cloudflare.pages.dev/rishi&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Repo&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/rishi-raj-jain/image-gallery-cms-with-astro-xata-cloudflare" rel="noopener noreferrer"&gt;https://github.com/rishi-raj-jain/image-gallery-cms-with-astro-xata-cloudflare&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Astro with Xata&lt;/td&gt;
&lt;td&gt;&lt;a href="https://xata.io/docs/getting-started/astro" rel="noopener noreferrer"&gt;https://xata.io/docs/getting-started/astro&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Astro View Transition Forms&lt;/td&gt;
&lt;td&gt;&lt;a href="https://docs.astro.build/en/guides/view-transitions/#transitions-with-forms" rel="noopener noreferrer"&gt;https://docs.astro.build/en/guides/view-transitions/#transitions-with-forms&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Xata File Attachments&lt;/td&gt;
&lt;td&gt;&lt;a href="https://xata.io/docs/sdk/file-attachments#upload-a-file-using-file-apis" rel="noopener noreferrer"&gt;https://xata.io/docs/sdk/file-attachments#upload-a-file-using-file-apis&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Xata Transformations&lt;/td&gt;
&lt;td&gt;&lt;a href="https://xata.io/docs/sdk/image-transformations" rel="noopener noreferrer"&gt;https://xata.io/docs/sdk/image-transformations&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Xata Get Records&lt;/td&gt;
&lt;td&gt;&lt;a href="https://xata.io/docs/sdk/get#the-typescript-sdk-functions-for-querying" rel="noopener noreferrer"&gt;https://xata.io/docs/sdk/get#the-typescript-sdk-functions-for-querying&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloudflare Workers Limits&lt;/td&gt;
&lt;td&gt;&lt;a href="https://developers.cloudflare.com/workers/platform/limits/#request-limits" rel="noopener noreferrer"&gt;https://developers.cloudflare.com/workers/platform/limits/#request-limits&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;We'd love to hear from you if you have any feedback on this tutorial, would like to know more about Xata, or if you'd like to contribute a community blog or tutorial. Reach out to us on &lt;a href="https://discord.com/invite/kvAcQKh7vm" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; or join us on &lt;a href="https://twitter.com/xata" rel="noopener noreferrer"&gt;X | Twitter&lt;/a&gt;. Happy building 🦋&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Create your own content management system with Remix and Xata</title>
      <dc:creator>Cezzaine Zaher</dc:creator>
      <pubDate>Fri, 07 Jun 2024 14:37:44 +0000</pubDate>
      <link>https://forem.com/xata/create-your-own-content-management-system-with-remix-and-xata-2ac2</link>
      <guid>https://forem.com/xata/create-your-own-content-management-system-with-remix-and-xata-2ac2</guid>
      <description>&lt;p&gt;In this post, you'll create a content CMS using Xata, Remix, Novel, LiteLLM, and Vercel. You'll learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up Xata&lt;/li&gt;
&lt;li&gt;Create a schema with different column types&lt;/li&gt;
&lt;li&gt;Handle forms in Remix using Form Actions&lt;/li&gt;
&lt;li&gt;Implement Client Side Image Uploads&lt;/li&gt;
&lt;li&gt;Use an AI-powered WYSIWYG Editor&lt;/li&gt;
&lt;li&gt;Implement content-wide search&lt;/li&gt;
&lt;li&gt;Create dynamic content routes with Remix&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;You'll need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://xata.io/" rel="noopener noreferrer"&gt;Xata&lt;/a&gt; account&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodejs.org/en/blog/announcements/v18-release-announce" rel="noopener noreferrer"&gt;Node.js 18&lt;/a&gt; or later&lt;/li&gt;
&lt;li&gt;An &lt;a href="https://platform.openai.com" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; account&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://vercel.com" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt; Account&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://xata.io" rel="noopener noreferrer"&gt;Xata&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Serverless database platform for scalable, real-time applications.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://remix.run" rel="noopener noreferrer"&gt;Remix&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Framework for building full-stack web applications with a focus on Web Standards.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/BerriAI/litellm" rel="noopener noreferrer"&gt;litelln&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Call all LLM APIs using the OpenAI format.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://novel.sh" rel="noopener noreferrer"&gt;Nove&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;A Notion-style WYSIWYG editor with AI-powered autocompletion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://tailwindcss.com/" rel="noopener noreferrer"&gt;TailwindCSS&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;CSS framework for building custom designs.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://vercel.com" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;A cloud platform for deploying and scaling web applications.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Setting up a Xata Database
&lt;/h2&gt;

&lt;p&gt;After you've created a Xata account and are logged in, create a database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesaibyr2zuh9zn2hrnmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesaibyr2zuh9zn2hrnmd.png" alt="Create a database" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to create a table, in this instance &lt;code&gt;uploads&lt;/code&gt;, that contains all the uploaded images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9o7t9n9cuwwit9t0x8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9o7t9n9cuwwit9t0x8h.png" alt="Create uploads table" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great, now click on &lt;strong&gt;Schema&lt;/strong&gt; in the left sidebar and create one more table, &lt;code&gt;content&lt;/code&gt;, by clicking &lt;strong&gt;Add a table&lt;/strong&gt;. The two tables will contain user content and user-uploaded photographs. With that completed, you will see the schema as below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69a7erzy6k9d6tagnm0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69a7erzy6k9d6tagnm0f.png" alt="View uploads and content schema" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s move on to adding relevant columns in the tables you've just created.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Schema
&lt;/h2&gt;

&lt;p&gt;In the &lt;code&gt;uploads&lt;/code&gt; table, you want to store only the images (and no other attributes) so that you can create references to the same image object again, if needed.&lt;/p&gt;

&lt;p&gt;Proceed with adding a column to store &lt;code&gt;file&lt;/code&gt; type objects. In our case, the &lt;code&gt;file&lt;/code&gt; type object is an image, but you can use this type to store any kind of blob (e.g. PDFs, fonts, etc.) sized up to 1 GB.&lt;/p&gt;

&lt;p&gt;First, click &lt;strong&gt;+ Add column&lt;/strong&gt; and select &lt;strong&gt;File&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7erl1ids1eyhz2kw20o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7erl1ids1eyhz2kw20o.png" alt="Add a column" width="681" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the column name to &lt;code&gt;photo&lt;/code&gt; and to make files public (so that they can be shown to users when they visit the image gallery), check the &lt;strong&gt;Make files public by default&lt;/strong&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F816p851prujdnvkiyzcq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F816p851prujdnvkiyzcq.png" alt="Make files public by default" width="675" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;content&lt;/code&gt; table, we want to store attributes such as the content’s unique slug (the URL path where the content will be displayed), the title, the author’s name, the author’s image with its dimensions, and the content’s OG image with its dimensions.&lt;/p&gt;

&lt;p&gt;Proceed with adding the column named &lt;code&gt;slug&lt;/code&gt;. It maintains the uniqueness of each piece of content that gets created. Click &lt;strong&gt;+ Add a column&lt;/strong&gt;, select the &lt;code&gt;String&lt;/code&gt; type, and enter the column name as &lt;code&gt;slug&lt;/code&gt;. To associate a slug with exactly one piece of content, check the &lt;code&gt;Unique&lt;/code&gt; attribute so that duplicate entries cannot be inserted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxmwf3skyyn5y366mjcy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxmwf3skyyn5y366mjcy.png" alt="Add slug column" width="626" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In similar fashion, create &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;author_name&lt;/code&gt;, &lt;code&gt;author_image_url&lt;/code&gt;, &lt;code&gt;og_image_url&lt;/code&gt;, &lt;code&gt;author_image_w&lt;/code&gt;, &lt;code&gt;author_image_h&lt;/code&gt;, &lt;code&gt;og_image_w&lt;/code&gt;, &lt;code&gt;og_image_h&lt;/code&gt; as &lt;code&gt;String&lt;/code&gt; type (but not &lt;code&gt;Unique&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Great, you can also store the user &lt;code&gt;content&lt;/code&gt; as &lt;code&gt;Text&lt;/code&gt; type. While &lt;code&gt;String&lt;/code&gt; is a great default type, storing more than 2048 characters would require you to switch to the &lt;code&gt;Text&lt;/code&gt; type. Read more about the limits in &lt;a href="https://xata.io/docs/rest-api/limits#column-limits" rel="noopener noreferrer"&gt;Xata Column limits&lt;/a&gt;.&lt;/p&gt;
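That limit is easy to encode as a quick check when deciding between the two column types. A sketch using the 2048-character figure cited above (the helper name is my own):

```typescript
// Pick a Xata column type for a text field based on its length:
// `String` holds up to 2048 characters; longer values need `Text`.
const STRING_COLUMN_LIMIT = 2048;

function columnTypeFor(sample: string): 'String' | 'Text' {
  return sample.length <= STRING_COLUMN_LIMIT ? 'String' : 'Text';
}

console.log(columnTypeFor('A short title'));  // String
console.log(columnTypeFor('x'.repeat(5000))); // Text
```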

&lt;p&gt;Lovely! With all that done, the final schema should look like the one below 👇🏻&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1a8uyqla3sb07w9p7j2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1a8uyqla3sb07w9p7j2.png" alt="Add more CMS columns" width="800" height="788"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the project
&lt;/h2&gt;

&lt;p&gt;Clone the app repository to follow along with this tutorial by running the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/rishi-raj-jain/remix-wysiwyg-litellm-xata-vercel
&lt;span class="nb"&gt;cd &lt;/span&gt;remix-wysiwyg-litellm-xata-vercel
npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configure Xata with Remix
&lt;/h2&gt;

&lt;p&gt;To seamlessly use Xata with Remix, install the Xata CLI globally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @xata.io/cli &lt;span class="nt"&gt;-g&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, authorize the Xata CLI so it is associated with the logged-in account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xata auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3py0duhg6qb4gvtme4yv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3py0duhg6qb4gvtme4yv.png" alt="Create new API key" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great! Now, initialize your project locally with the Xata CLI command. In this command, you will need to use the database URL for the database that you just created. You can copy the URL from the Settings page of the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xata init &lt;span class="nt"&gt;--db&lt;/span&gt; https://Rishi-Raj-Jain-s-workspace-80514q.us-east-1.xata.sh/db/remix-wysiwyg-litellm-xata-vercel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Answer some quick one-time questions from the CLI to integrate with Remix.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv1mgyhybu1gpankmrap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv1mgyhybu1gpankmrap.png" alt="Xata CLI" width="582" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing form actions in Remix
&lt;/h2&gt;

&lt;p&gt;With Remix, &lt;a href="https://reactrouter.com/en/main/route/action" rel="noopener noreferrer"&gt;Route Actions&lt;/a&gt; are the way to process form &lt;code&gt;POST&lt;/code&gt; requests. Here’s how we’ve enabled form actions to process form submissions and insert records into the Xata database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Form&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@remix-run/react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ActionFunctionArgs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;redirect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@remix-run/node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;action&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;ActionFunctionArgs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Get the form data&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Index&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Form&lt;/span&gt; &lt;span class="na"&gt;navigate&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"post"&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"mt-8 flex flex-col"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="nx"&gt;my&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt; &lt;span class="nx"&gt;elements&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Form&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows you to colocate the serverless backend and frontend flow for a given page in Remix. Say you accept a form submission containing the title, slug, and the content’s HTML; you process it on the server and sync it with your Xata serverless database. Here’s how you’d do all of that in a single Remix route (&lt;code&gt;app/routes/_index.tsx&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/routes/_index.tsx&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Editor&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;novel&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Form&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@remix-run/react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getXataClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/xata.server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Upload&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/components/Utility/Upload&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ActionFunctionArgs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;redirect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@remix-run/node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;action&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;ActionFunctionArgs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Import the Xata Client created by the Xata CLI in app/xata.server.ts&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getXataClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="c1"&gt;// Get the form data&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;slug&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;slug&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;title&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;content-html&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="c1"&gt;// Sync the attributes to the content table in Xata&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;slug&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Index&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Form&lt;/span&gt; &lt;span class="na"&gt;navigate&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"post"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;New Article&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Title&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt; &lt;span class="na"&gt;required&lt;/span&gt; &lt;span class="na"&gt;autoComplete&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"off"&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"title"&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"title"&lt;/span&gt; &lt;span class="na"&gt;placeholder&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Title"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Content&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt; &lt;span class="na"&gt;required&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"content-html"&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"content-html"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Slug&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt; &lt;span class="na"&gt;required&lt;/span&gt; &lt;span class="na"&gt;autoComplete&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"off"&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"slug"&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"slug"&lt;/span&gt; &lt;span class="na"&gt;placeholder&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Slug"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"submit"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Publish &lt;span class="ni"&gt;&amp;amp;rarr;&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Form&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Handling Client Side Image Uploads with Xata
&lt;/h2&gt;

&lt;p&gt;To let users add their own custom OG image to the content, we use Xata &lt;a href="https://xata.io/docs/sdk/file-attachments#upload-urls" rel="noopener noreferrer"&gt;Upload URLs&lt;/a&gt; to handle image uploads on the client side. There are two steps to make a successful client-side image upload with Xata and Remix:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create a record with an empty photo &lt;code&gt;base64Content&lt;/code&gt; and obtain the photo’s &lt;strong&gt;uploadUrl&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/routes/api_.image.upload.tsx&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;json&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@remix-run/node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getXataClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/xata.server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getXataClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="c1"&gt;// Use the Xata client to create a new 'photo' record with an empty base64 content&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;uploads&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;photo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;base64Content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;photo.uploadUrl&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;uploadUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;photo&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;uploadUrl&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt; Make a client-side &lt;code&gt;PUT&lt;/code&gt; request to the &lt;strong&gt;uploadUrl&lt;/strong&gt; with the image’s buffer as the body.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/components/Utility/Upload.tsx&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;uploadFile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChangeEvent&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;HTMLInputElement&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Get the reference to the file uploaded&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="p"&gt;?.[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FileReader&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Load the file buffer&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fileData&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Create blob from the file data with the relevant file's type&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Blob&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;fileData&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="c1"&gt;// Make a fetch to the get the uploadUrl&lt;/span&gt;
      &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/image/upload&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="c1"&gt;// Use the uploadUrl to upload the buffer&lt;/span&gt;
          &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;uploadUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;PUT&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
          &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="c1"&gt;// Read the user uploaded file as buffer&lt;/span&gt;
  &lt;span class="nx"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readAsArrayBuffer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using an AI-powered WYSIWYG Editor
&lt;/h2&gt;

&lt;p&gt;To make writing content easier, users need a reliable, user-friendly, AI-powered WYSIWYG editor. We use Novel, a Notion-style WYSIWYG editor that provides a seamless experience with intuitive features and a real-time preview of the content being written. To get the content as HTML, we use Novel’s &lt;code&gt;onUpdate&lt;/code&gt; callback and set the HTML string on an input inside the form element.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/routes/_index.tsx&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Editor&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;novel&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Form&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@remix-run/react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Index&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Form&lt;/span&gt; &lt;span class="na"&gt;navigate&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"post"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Content&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt; &lt;span class="na"&gt;required&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"content-html"&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"content-html"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Editor&lt;/span&gt;
        &lt;span class="na"&gt;defaultValue&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="na"&gt;storageKey&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"novel__editor"&lt;/span&gt;
        &lt;span class="na"&gt;onUpdate&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tmp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getHTML&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;htmlSelector&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;content-html&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tmp&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;htmlSelector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;htmlSelector&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setAttribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;value&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tmp&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"submit"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Publish &lt;span class="ni"&gt;&amp;amp;rarr;&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Form&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Implementing autocompletion using LiteLLM
&lt;/h2&gt;

&lt;p&gt;Under the hood, Novel makes a POST request to &lt;code&gt;/api/generate&lt;/code&gt;, expecting a stream of tokens from the OpenAI API. Let’s see how we’ve customised the endpoint to gain the flexibility of using any AI API provider with LiteLLM, which lets you call 100+ LLMs with the same OpenAI-like input and output. To implement autocompletion with streaming, we use the &lt;code&gt;completion&lt;/code&gt; method with the &lt;code&gt;stream&lt;/code&gt; flag set to &lt;code&gt;true&lt;/code&gt; and return the response as a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream" rel="noopener noreferrer"&gt;ReadableStream&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/routes/api_.generate.tsx&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;completion&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;litellm&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ActionFunctionArgs&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@remix-run/node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;action&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;ActionFunctionArgs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;encoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextEncoder&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;completion&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;n&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;top_p&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;presence_penalty&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;system&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
          &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;You are an AI writing assistant that continues existing text based on context from prior text. &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
          &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Give more weight/priority to the later characters than the beginning ones. &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
          &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Limit your response to no more than 200 characters, but make sure to construct complete sentences.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
        &lt;span class="c1"&gt;// we're disabling markdown for now until we can figure out a way to stream markdown text with proper formatting: https://github.com/steven-tey/novel/discussions/7&lt;/span&gt;
        &lt;span class="c1"&gt;// "Use Markdown formatting when appropriate.",&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="c1"&gt;// Create a streaming response&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;customReadable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ReadableStream&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;part&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tmp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;part&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]?.&lt;/span&gt;&lt;span class="nx"&gt;delta&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
          &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tmp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enqueue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;encoder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tmp&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="c1"&gt;// Return the stream response and keep the connection alive&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;customReadable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Set the headers for Server-Sent Events (SSE)&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;Connection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;keep-alive&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Encoding&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;none&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Cache-Control&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;no-cache, no-transform&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;text/event-stream; charset=utf-8&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
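On the client, the stream returned by this endpoint can be consumed incrementally with the Fetch API. Here is a minimal sketch (the `collectStream` and `generate` helpers are hypothetical, not part of the app's code) that accumulates the streamed tokens into a single string:

```typescript
// Hypothetical helper: accumulate a token stream (like the one returned
// by the /api/generate route above) into a single string.
async function collectStream(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // `stream: true` keeps multi-byte characters intact across chunk boundaries
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

// Hypothetical usage sketch: POST the prompt and collect the streamed completion.
async function generate(prompt: string): Promise<string> {
  const res = await fetch('/api/generate', {
    method: 'POST',
    body: JSON.stringify({ prompt })
  });
  if (!res.body) return '';
  return collectStream(res.body);
}
```

In a real editor you would append each decoded chunk to the UI as it arrives rather than waiting for the full string, which is exactly what Novel does internally.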



&lt;h2&gt;
  
  
  &lt;strong&gt;Implementing Content Wide Search with Xata Search&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To let users search through the entire collection of content, we use Remix Route Actions with Xata Search to retrieve relevant records from the database. With Xata Search, you choose the tables to &lt;strong&gt;search through&lt;/strong&gt; (in this instance, &lt;code&gt;content&lt;/code&gt;) and set the targets to &lt;strong&gt;search on&lt;/strong&gt; (in this instance, &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;slug&lt;/code&gt;, &lt;code&gt;content&lt;/code&gt; and &lt;code&gt;author_name&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/routes/_index.tsx&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;action&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;ActionFunctionArgs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;search&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;search&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="c1"&gt;// If the 'search' parameter is missing, redirect to '/content'&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;search&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/content&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getXataClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// Use the Xata client to perform a search across specified tables with fuzziness&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;records&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;search&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;search&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;tables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;content&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;content&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;title&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;slug&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;author_name&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;fuzziness&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="c1"&gt;// Extract the 'record' property from each search result containing the content&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;records&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;search&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating Dynamic Routes in Remix
&lt;/h2&gt;

&lt;p&gt;To create a page dynamically for each piece of content, we'll use Remix Dynamic Routes and Route Loaders. A route file whose name contains &lt;strong&gt;$&lt;/strong&gt;, in this instance content_.&lt;strong&gt;$id&lt;/strong&gt;.tsx, defines a dynamic route: for URLs such as &lt;code&gt;/content/a&lt;/code&gt;, &lt;code&gt;/content/b&lt;/code&gt;, or &lt;code&gt;/content/anything&lt;/code&gt;, the last segment of the URL is captured into the &lt;strong&gt;id&lt;/strong&gt; param.&lt;/p&gt;
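To make the capture concrete, here is a hypothetical sketch (not Remix's actual matcher API) of how a `content_.$id.tsx` route maps a URL onto its `id` param:

```typescript
// Illustrative only: approximates what Remix's file-based router does
// for the route file content_.$id.tsx.
function matchContentRoute(pathname: string): { id: string } | null {
  // Match /content/<last-segment>; the captured segment becomes params.id
  const match = pathname.match(/^\/content\/([^/]+)$/);
  return match ? { id: match[1] } : null;
}

console.log(matchContentRoute('/content/anything')); // { id: 'anything' }
console.log(matchContentRoute('/about')); // null
```

In the real route below, Remix performs this matching for you and hands the captured segment to the loader as `params.id`.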

&lt;p&gt;With a Remix Loader and Xata Records, we dynamically query the database for the content matching a particular id. Once obtained, we process the content and return it as an HTML string. Finally, we use the loader data to build the UI, following best practices such as lazy loading non-critical images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/routes/content_.$id.tsx&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getXataClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/xata.server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Image&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/components/Utility/Image&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useLoaderData&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@remix-run/react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;unescapeHTML&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/lib/util.server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getTransformedImage&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/lib/ast.server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LoaderFunctionArgs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;redirect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@remix-run/node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;LoaderFunctionArgs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/404&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getXataClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="c1"&gt;// Use the Xata client to fetch content from the 'content' table based on the 'slug'&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;xata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;slug&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getFirst&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getTransformedImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;unescapeHTML&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="c1"&gt;// If content is not found, redirect to '/404'&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/404&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Pic&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useLoaderData&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;loader&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Image&lt;/span&gt;
          &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
          &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author_image_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
          &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author_image_w&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
          &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author_image_h&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Image&lt;/span&gt;
        &lt;span class="na"&gt;loading&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"eager"&lt;/span&gt;
        &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;og_image_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;og_image_w&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;og_image_h&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;dangerouslySetInnerHTML&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;__html&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy to Vercel
&lt;/h2&gt;

&lt;p&gt;The repository is now ready to deploy to Vercel. Use the following steps to deploy: 👇🏻&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start by creating a GitHub repository containing your app's code.&lt;/li&gt;
&lt;li&gt;Then, navigate to the Vercel Dashboard and create a &lt;strong&gt;New Project&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Link the new project to the GitHub repository you just created.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Settings&lt;/strong&gt;, update the &lt;em&gt;Environment Variables&lt;/em&gt; to match those in your local &lt;code&gt;.env&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Deploy! 🚀&lt;/li&gt;
&lt;/ul&gt;
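If you prefer the terminal, the dashboard steps above can also be done with the Vercel CLI. A sketch, assuming the CLI is installed and that `XATA_API_KEY` and `XATA_BRANCH` are the variables from your local `.env` (adjust to your own variable names):

```shell
# Link the local repository to a new or existing Vercel project
vercel link

# Add each environment variable from your local .env
# (repeat for every variable your app needs)
vercel env add XATA_API_KEY production
vercel env add XATA_BRANCH production

# Deploy to production
vercel deploy --prod
```

These are deployment commands against your Vercel account, so run them from the root of the linked repository.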

&lt;h2&gt;
  
  
  More Information
&lt;/h2&gt;

&lt;p&gt;For more detailed insights, explore the references cited in this post.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resource&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Repo&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/rishi-raj-jain/remix-wysiwyg-litellm-xata-vercel" rel="noopener noreferrer"&gt;https://github.com/rishi-raj-jain/remix-wysiwyg-litellm-xata-vercel&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Remix with Xata&lt;/td&gt;
&lt;td&gt;&lt;a href="https://xata.io/docs/getting-started/remix" rel="noopener noreferrer"&gt;https://xata.io/docs/getting-started/remix&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Xata File Attachments&lt;/td&gt;
&lt;td&gt;&lt;a href="https://xata.io/docs/sdk/file-attachments#upload-a-file-using-file-apis" rel="noopener noreferrer"&gt;https://xata.io/docs/sdk/file-attachments#upload-a-file-using-file-apis&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Remix Route Actions&lt;/td&gt;
&lt;td&gt;&lt;a href="https://remix.run/docs/en/main/discussion/data-flow#route-action" rel="noopener noreferrer"&gt;https://remix.run/docs/en/main/discussion/data-flow#route-action&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Remix Route Loaders&lt;/td&gt;
&lt;td&gt;&lt;a href="https://remix.run/docs/en/main/discussion/data-flow#route-loader" rel="noopener noreferrer"&gt;https://remix.run/docs/en/main/discussion/data-flow#route-loader&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;We'd love to hear from you if you have any feedback on this tutorial, would like to know more about Xata, or if you'd like to contribute a community blog or tutorial. Reach out to us on &lt;a href="https://discord.com/invite/kvAcQKh7vm" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; or join us on &lt;a href="https://twitter.com/xata" rel="noopener noreferrer"&gt;X | Twitter&lt;/a&gt;. Happy building 🦋&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
