<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Andrew Kim | Dreamlit.ai</title>
    <description>The latest articles on Forem by Andrew Kim | Dreamlit.ai (@andrewk17).</description>
    <link>https://forem.com/andrewk17</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3854008%2F8ef2bfce-c560-4fa8-b2a3-2bad13c783a3.jpg</url>
      <title>Forem: Andrew Kim | Dreamlit.ai</title>
      <link>https://forem.com/andrewk17</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/andrewk17"/>
    <language>en</language>
    <item>
      <title>Lovable Cloud to Your Own Supabase, Without the Password Resets</title>
      <dc:creator>Andrew Kim | Dreamlit.ai</dc:creator>
      <pubDate>Tue, 31 Mar 2026 19:01:48 +0000</pubDate>
      <link>https://forem.com/andrewk17/lovable-cloud-to-your-own-supabase-without-the-password-resets-20c6</link>
      <guid>https://forem.com/andrewk17/lovable-cloud-to-your-own-supabase-without-the-password-resets-20c6</guid>
      <description>&lt;p&gt;We're big fans of &lt;a href="https://lovable.dev" rel="noopener noreferrer"&gt;Lovable&lt;/a&gt;. It's genuinely one of the best tools for going from an idea to a working app, and a lot of our customers build on it.&lt;/p&gt;

&lt;p&gt;Their &lt;a href="https://docs.lovable.dev/features/deploy/lovable-cloud" rel="noopener noreferrer"&gt;Cloud&lt;/a&gt; feature takes things further by managing a Supabase instance for you. No database setup, no hosting headaches. It's a great experience right up until the moment you want to connect something external: Zapier, email automations, analytics that talk to Postgres directly.&lt;/p&gt;

&lt;p&gt;That's when you realize you don't have the database keys.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://docs.lovable.dev/tips-tricks/external-deployment-hosting#what-migrates-and-how" rel="noopener noreferrer"&gt;official migration path&lt;/a&gt; off Cloud is rough. CSV exports one table at a time. Storage files uploaded individually. &lt;strong&gt;Every user has to reset their password.&lt;/strong&gt; If you have real users, that last part is a non-starter.&lt;/p&gt;

&lt;p&gt;We built an &lt;a href="https://github.com/dreamlit-ai/lovable-cloud-to-supabase-exporter" rel="noopener noreferrer"&gt;open-source exporter&lt;/a&gt; that handles the whole thing. Here's what we learned building it.&lt;/p&gt;

&lt;h2&gt;The approach&lt;/h2&gt;

&lt;p&gt;Both sides are Supabase, which means both sides are Postgres. That's the key insight: we don't need a custom migration format or intermediary API. Native Postgres tooling (&lt;code&gt;pg_dump&lt;/code&gt; and &lt;code&gt;psql&lt;/code&gt;) can move the data. The hard part is everything Supabase layers on top.&lt;/p&gt;

&lt;p&gt;The migration breaks down into three problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Getting credentials&lt;/strong&gt; out of a locked-down Lovable Cloud project (you don't have direct DB access)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloning the database&lt;/strong&gt; without breaking Supabase's auth system, system schemas, or foreign key ordering&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copying storage files&lt;/strong&gt; (avatars, uploads, assets) between Supabase Storage instances&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each one had a non-obvious solution.&lt;/p&gt;

&lt;h2&gt;Getting credentials out of a locked-down project&lt;/h2&gt;

&lt;p&gt;Lovable Cloud doesn't expose your database URL or service role key directly.&lt;/p&gt;

&lt;p&gt;But here's the thing: it's still Supabase under the hood. And Supabase edge functions have access to environment variables like &lt;code&gt;SUPABASE_DB_URL&lt;/code&gt; and &lt;code&gt;SUPABASE_SERVICE_ROLE_KEY&lt;/code&gt;. Lovable Cloud doesn't block you from deploying edge functions, so we can use one as a credential bridge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// access-key check omitted for brevity (full version in repo)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;jsonResponse&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;supabase_db_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SUPABASE_DB_URL&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;service_role_key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SUPABASE_SERVICE_ROLE_KEY&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Deploy this as an edge function on the source project, and the exporter can connect. The full version includes access-key protection so the endpoint isn't open to anyone. After the migration, delete the edge function and rotate your secrets.&lt;/p&gt;
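&lt;p&gt;On the exporter side, the bridge response should be validated before anything tries to connect. A minimal sketch of that check (the helper name &lt;code&gt;parseBridgeResponse&lt;/code&gt; is illustrative, not from the repo):&lt;/p&gt;

```typescript
// Shape the credential-bridge edge function returns.
interface BridgeCredentials {
  supabase_db_url: string;
  service_role_key: string;
}

// Validate the bridge's JSON body up front so a misconfigured edge
// function fails loudly instead of halfway through the migration.
function parseBridgeResponse(json: unknown): BridgeCredentials {
  const record = json as { [key: string]: unknown };
  const dbUrl = record["supabase_db_url"];
  const serviceKey = record["service_role_key"];
  if (typeof dbUrl !== "string" || dbUrl.length === 0) {
    throw new Error("bridge response missing supabase_db_url");
  }
  if (typeof serviceKey !== "string" || serviceKey.length === 0) {
    throw new Error("bridge response missing service_role_key");
  }
  return { supabase_db_url: dbUrl, service_role_key: serviceKey };
}
```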

&lt;h2&gt;Schema vs. data: why you can't just pg_dump everything&lt;/h2&gt;

&lt;p&gt;The clone happens in four stages (all running inside a Docker container built on &lt;code&gt;postgres:17-alpine&lt;/code&gt;, so users don't need Postgres tooling installed locally):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dump_schema → restore_schema → dump_data → restore_data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
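&lt;p&gt;The first stage is a plain schema-only dump. A sketch of the flags involved (close to what this post describes; the repo's exact invocation may differ):&lt;/p&gt;

```typescript
// Arguments for the dump_schema stage: definitions only, public schema,
// ownership and grants stripped so they don't clash on the target.
function dumpSchemaArgs(sourceDbUrl: string): string[] {
  return [
    sourceDbUrl,
    "--schema-only",   // DDL only, no rows
    "--schema=public", // app tables; system schemas already exist on the target
    "--no-owner",      // source roles don't exist on the target
    "--no-acl",        // same for grants
  ];
}
```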



&lt;p&gt;Schema first, because the target needs table definitions before it can accept rows. But you can't dump the schema as-is. Every Supabase project comes with a &lt;code&gt;public&lt;/code&gt; schema and a standard comment on it. Restoring those into a fresh Supabase project causes conflicts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rawSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rawSchemaPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;utf8&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;filteredSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;rawSchema&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;line&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
      &lt;span class="nx"&gt;line&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;CREATE SCHEMA public;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
      &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;line&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;COMMENT ON SCHEMA public IS &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For data, we dump &lt;code&gt;public&lt;/code&gt; and &lt;code&gt;auth&lt;/code&gt; schemas together but skip transient, system-managed tables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;EXCLUDED_TABLES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;auth.schema_migrations&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;storage.migrations&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;supabase_functions.migrations&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;auth.sessions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;auth.refresh_tokens&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;auth.flow_state&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;auth.one_time_tokens&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;auth.audit_log_entries&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
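&lt;p&gt;Feeding that list to &lt;code&gt;pg_dump&lt;/code&gt; is just a repeated flag, one per table (a sketch of the expansion):&lt;/p&gt;

```typescript
// Expand the exclusion list into repeated pg_dump flags, e.g.
// --exclude-table=auth.sessions --exclude-table=auth.refresh_tokens ...
function excludeTableFlags(excludedTables: string[]): string[] {
  return excludedTables.map((table) => `--exclude-table=${table}`);
}
```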



&lt;p&gt;The &lt;code&gt;auth.users&lt;/code&gt; table is &lt;em&gt;not&lt;/em&gt; excluded. That's the key to migrating without password resets. Supabase stores hashed passwords in &lt;code&gt;auth.users&lt;/code&gt;. Moving the row moves the hash. Users log in on the new instance with their existing password.&lt;/p&gt;

&lt;h2&gt;Streaming data through a FIFO pipe&lt;/h2&gt;

&lt;p&gt;The obvious approach for the data stage: &lt;code&gt;pg_dump&lt;/code&gt; to a file, then &lt;code&gt;psql&lt;/code&gt; from that file. This works until someone's database is larger than the 2GB disk limit on Cloudflare containers (where the hosted version runs).&lt;/p&gt;

&lt;p&gt;Instead, the clone script creates a named FIFO (first-in-first-out) pipe:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkfifo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FIFO&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
pg_dump &lt;span class="nv"&gt;$SOURCE_DB_URL&lt;/span&gt; &lt;span class="nt"&gt;--data-only&lt;/span&gt; &lt;span class="nt"&gt;--schema&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;public &lt;span class="nt"&gt;--schema&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;auth &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--exclude-table&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;... &lt;span class="nt"&gt;--no-owner&lt;/span&gt; &lt;span class="nt"&gt;--no-acl&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FIFO&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;
&lt;span class="nv"&gt;DUMP_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$!&lt;/span&gt;

psql &lt;span class="nv"&gt;$TARGET_DB_URL&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;ON_ERROR_STOP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &amp;lt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FIFO&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;
&lt;span class="nv"&gt;RESTORE_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$!&lt;/span&gt;

&lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nv"&gt;$DUMP_PID&lt;/span&gt;
&lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nv"&gt;$RESTORE_PID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Data flows from source to target without ever fully materializing on disk. The container's storage requirements stay constant regardless of database size.&lt;/p&gt;

&lt;p&gt;One subtlety: the target needs to accept rows in whatever order &lt;code&gt;pg_dump&lt;/code&gt; produces them, even if foreign key constraints exist between tables. The script sets &lt;code&gt;session_replication_role=replica&lt;/code&gt; on the target connection, disabling triggers and FK enforcement for the duration of the import. Rows restored this way aren't re-checked afterwards, but the dump comes from a database where those constraints already held, so the data stays consistent. Once &lt;code&gt;psql&lt;/code&gt; finishes, the session ends and normal enforcement resumes for future writes.&lt;/p&gt;
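&lt;p&gt;One way to apply that setting from a wrapper is the &lt;code&gt;PGOPTIONS&lt;/code&gt; environment variable, which &lt;code&gt;psql&lt;/code&gt; forwards to the server as per-session startup options (a sketch; the repo may set it differently):&lt;/p&gt;

```typescript
// Environment for the restore-side psql: every statement in the session
// runs with triggers and FK enforcement disabled.
function restoreEnv(baseEnv: { [key: string]: string }): { [key: string]: string } {
  return {
    ...baseEnv,
    // libpq passes PGOPTIONS to the server as command-line options.
    PGOPTIONS: "-c session_replication_role=replica",
  };
}
```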

&lt;h2&gt;There's no "copy all storage" API&lt;/h2&gt;

&lt;p&gt;Database migration gets all the attention, but most Lovable apps also store files in Supabase Storage: avatars, uploads, assets. If you only move the database, your app loads and every image is a 404.&lt;/p&gt;

&lt;p&gt;Supabase doesn't have a "copy all buckets to another project" endpoint. You have to list every bucket, recreate each one on the target (with matching settings: public/private, file size limits, allowed MIME types), then download and re-upload every object individually. Lovable Cloud storage sometimes has orphaned references (rows in the DB pointing to files that no longer exist).&lt;/p&gt;
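&lt;p&gt;Recreating a bucket is mostly a field-for-field mapping from what the source reports to what the target's create call accepts. A sketch (field names follow supabase-js v2's storage API; treat the helper as illustrative):&lt;/p&gt;

```typescript
// Metadata the Storage API reports for a bucket on the source project.
interface SourceBucket {
  id: string;
  public: boolean;
  file_size_limit: number | null;
  allowed_mime_types: string[] | null;
}

// Options in the shape supabase-js's createBucket accepts, so the
// target bucket matches the source's visibility and upload limits.
function toCreateBucketOptions(bucket: SourceBucket): {
  public: boolean;
  fileSizeLimit: number | null;
  allowedMimeTypes: string[] | null;
} {
  return {
    public: bucket.public,
    fileSizeLimit: bucket.file_size_limit,
    allowedMimeTypes: bucket.allowed_mime_types,
  };
}
```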

&lt;p&gt;The exporter walks the source buckets, recreates them on the target, and copies objects in parallel. When a file doesn't actually exist on the source (orphaned reference), the copy returns early instead of failing the whole migration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isMissingStorageObjectResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lowered&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lowered&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;"error":"not_found"&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lowered&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;object not found&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// in copyOneObject:&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;isMissingStorageObjectResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;downloadResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;errorBody&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;skipped_missing&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each object ends up as &lt;code&gt;"copied"&lt;/code&gt;, &lt;code&gt;"skipped_missing"&lt;/code&gt;, or &lt;code&gt;"skipped_existing"&lt;/code&gt;. The summary tells you exactly what happened across the whole migration.&lt;/p&gt;
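&lt;p&gt;Tallying those per-object outcomes into the summary is a simple fold (sketch):&lt;/p&gt;

```typescript
type CopyOutcome = "copied" | "skipped_missing" | "skipped_existing";

// Count each outcome across the whole migration for the final report.
function summarize(outcomes: CopyOutcome[]): { [key: string]: number } {
  const counts: { [key: string]: number } = {
    copied: 0,
    skipped_missing: 0,
    skipped_existing: 0,
  };
  for (const outcome of outcomes) {
    counts[outcome] += 1;
  }
  return counts;
}
```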

&lt;h2&gt;Gotchas we hit along the way&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Permission checks before any writes.&lt;/strong&gt; Early versions would fail mid-migration when the target lacked INSERT permissions on certain tables. The exporter now pre-checks both SELECT on source and INSERT on target for every table before starting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Classified failure modes.&lt;/strong&gt; A migration can fail for a lot of reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source DB is unreachable&lt;/li&gt;
&lt;li&gt;Target database isn't empty&lt;/li&gt;
&lt;li&gt;Target is missing required permissions&lt;/li&gt;
&lt;li&gt;Credentials are wrong or expired&lt;/li&gt;
&lt;li&gt;Storage buckets can't be created&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each maps to a specific exit code with a human-readable hint so you know exactly what to fix. (All log output is sanitized to strip database passwords and service role keys before writing to stdout.)&lt;/p&gt;
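&lt;p&gt;The sanitizer boils down to never letting a connection string's password reach a log line. A minimal sketch of that kind of redaction (illustrative, not the repo's exact pattern):&lt;/p&gt;

```typescript
// Mask the password portion of any Postgres connection string that
// appears in a log message, leaving the rest readable for debugging.
function redactDbUrl(text: string): string {
  // Matches postgres:// or postgresql://, captures "user:", masks the password.
  return text.replace(/(postgres(?:ql)?:\/\/[^:\/\s]+:)[^@\s]+@/g, "$1***@");
}
```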

&lt;h2&gt;Failures and resumability without state&lt;/h2&gt;

&lt;p&gt;The hosted version runs on Cloudflare containers, and we picked containers specifically because they're ephemeral. Credentials pass through an isolated environment that gets destroyed after the migration. Nothing persists. That's a feature when you're handling someone's database URL and service role key, but it rules out saving job progress to disk and resuming where you left off.&lt;/p&gt;

&lt;p&gt;Instead, the exporter treats each run as atomic. We require the target to be blank before starting, so if the database clone fails, you create a fresh Supabase project and run it again. No half-migrated state to untangle.&lt;/p&gt;
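&lt;p&gt;The blank-target requirement is enforced up front, alongside the explicit &lt;code&gt;--confirm-target-blank&lt;/code&gt; flag. A sketch of that pre-flight guard (the table list would come from querying &lt;code&gt;information_schema&lt;/code&gt; on the target; here it's a parameter):&lt;/p&gt;

```typescript
// Refuse to run unless the user confirmed the flag and the target's
// public schema really is empty. Failing here means nothing was written.
function assertTargetBlank(publicTables: string[], confirmedBlank: boolean): void {
  if (!confirmedBlank) {
    throw new Error("pass --confirm-target-blank to acknowledge the target will be written to");
  }
  if (publicTables.length > 0) {
    throw new Error(`target is not empty: found ${publicTables.length} table(s) in public`);
  }
}
```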

&lt;p&gt;Storage is more best-effort by design. Files that fail to copy get skipped and logged rather than killing the whole migration. If you re-run, the copy engine scans the target first and skips anything that already made it across:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// before copying, collect what's already on the target&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;existingTargetObjectPaths&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;collectExistingObjectPaths&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;targetProjectUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;targetAdminKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bucketId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// then in the copy loop:&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;existingTargetObjectPaths&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;has&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fullPath&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;skipped_existing&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not a checkpoint system, but in practice it gets you there.&lt;/p&gt;

&lt;h2&gt;After the migration&lt;/h2&gt;

&lt;p&gt;Once your data is in your own Supabase, you have direct Postgres access. That's the whole point.&lt;/p&gt;

&lt;p&gt;Some things we've seen people connect:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zapier / Make&lt;/strong&gt; for workflow automations triggered by database changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://dreamlit.ai" rel="noopener noreferrer"&gt;Dreamlit&lt;/a&gt;&lt;/strong&gt; for database-driven transactional emails (connects to your Postgres, no API code)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analytics and monitoring&lt;/strong&gt; that query Postgres directly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom edge functions&lt;/strong&gt; that need the service role key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don't have to leave Lovable to build, either. Connect your own Supabase to a new Lovable project and your workflow stays the same. The difference is you own the infrastructure and can plug in whatever tools make sense.&lt;/p&gt;

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Hosted (no setup):&lt;/strong&gt; &lt;a href="https://dreamlit.ai/tools/lovable-cloud-to-supabase-exporter" rel="noopener noreferrer"&gt;dreamlit.ai/tools/lovable-cloud-to-supabase-exporter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run it locally:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/dreamlit-ai/lovable-cloud-to-supabase-exporter
&lt;span class="nb"&gt;cd &lt;/span&gt;lovable-cloud-to-supabase-exporter
pnpm &lt;span class="nb"&gt;install
&lt;/span&gt;pnpm exporter &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;export &lt;/span&gt;run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-edge-function-url&lt;/span&gt; &amp;lt;your-edge-function-url&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-edge-function-access-key&lt;/span&gt; &amp;lt;your-access-key&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target-db-url&lt;/span&gt; &amp;lt;your-supabase-db-url&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target-project-url&lt;/span&gt; &amp;lt;your-supabase-project-url&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target-admin-key&lt;/span&gt; &amp;lt;your-service-role-key&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--confirm-target-blank&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://github.com/dreamlit-ai/lovable-cloud-to-supabase-exporter" rel="noopener noreferrer"&gt;full source is on GitHub&lt;/a&gt;. MIT licensed.&lt;/p&gt;

&lt;p&gt;If you've migrated off Lovable Cloud (or want to), we'd love to hear what you're building on your own Supabase. Drop a comment or find us on &lt;a href="https://x.com/DreamlitAI" rel="noopener noreferrer"&gt;X&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>supabase</category>
      <category>lovable</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
