<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: The Codeero Group</title>
    <description>The latest articles on Forem by The Codeero Group (@codeero).</description>
    <link>https://forem.com/codeero</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2025%2Ff7831eec-41fe-46eb-b716-99583caad95e.png</url>
      <title>Forem: The Codeero Group</title>
      <link>https://forem.com/codeero</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/codeero"/>
    <language>en</language>
    <item>
      <title>Migrating Plausible Data Between Instances (Servers)</title>
      <dc:creator>Julian Engel</dc:creator>
      <pubDate>Sun, 24 Mar 2024 20:51:05 +0000</pubDate>
      <link>https://forem.com/codeero/migrating-plausible-data-between-instances-30ng</link>
      <guid>https://forem.com/codeero/migrating-plausible-data-between-instances-30ng</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Overview&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Over the weekend, our CapRover instance (an open-source Heroku alternative) crashed inexplicably. After a lot of digging, we found that a corrupted file was preventing the Docker swarm from starting. The error we got was: &lt;code&gt;can't initialize raft node: irreparable WAL error: wal: max entry size limit exceeded&lt;/code&gt;. This is roughly the Docker equivalent of the Windows Blue Screen of Death: the instance was beyond repair. While investigating, we discovered a 40 GB ClickHouse volume belonging to Plausible. &lt;br&gt;
It turned out that query logging had inflated the database to 40 GB, while the actual analytics data was only about 1 GB. &lt;br&gt;
Important note: the Plausible team fixed this long ago, but we had sponsored and used the project from the very start, so our instance still carried the logging issue. To save our data, we needed to move to a fresh Plausible install on a new server (migrating to Hetzner along the way).&lt;/p&gt;

&lt;p&gt;We determined the necessary steps: export the data from ClickHouse and PostgreSQL, transfer it securely, and import it into a new environment. &lt;/p&gt;

&lt;p&gt;Easier said than done. A weekend later, with plenty of Stack Overflow and GPT-4 help, we managed to migrate everything without any data loss. Below is the tutorial we wrote for our internal knowledge base. &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Exporting Data&lt;/strong&gt;
&lt;/h3&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;A. Export from ClickHouse&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Access ClickHouse Container:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;clickhouse-container-id&amp;gt; clickhouse-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Check Database Size:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To list all databases and their sizes:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;formatReadableSize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bytes_on_disk&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;total_size_on_disk&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="k"&gt;system&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parts&lt;/span&gt; &lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="k"&gt;database&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="k"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bytes_on_disk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;For detailed table sizes within a specific database:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight sql"&gt;&lt;code&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;table_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;formatReadableSize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bytes_on_disk&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;size_on_disk&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="k"&gt;system&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parts&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="k"&gt;database&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'your_database_name'&lt;/span&gt; &lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="k"&gt;table_name&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;size_on_disk&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Here we found that the ClickHouse database was the culprit, holding over 30 GB of query logs. &lt;/p&gt;
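&lt;p&gt;If you hit the same problem, disk space can be reclaimed before exporting by clearing ClickHouse's internal log tables from &lt;code&gt;clickhouse-client&lt;/code&gt;. This is a sketch assuming the default log table names; verify which log tables your instance actually has with &lt;code&gt;SHOW TABLES FROM system&lt;/code&gt;:&lt;/p&gt;

```sql
-- Reclaim disk space taken by ClickHouse's internal query logging.
-- Table names are the ClickHouse defaults; check yours with SHOW TABLES FROM system.
TRUNCATE TABLE system.query_log;
TRUNCATE TABLE system.query_thread_log;
```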

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Export Individual Tables:&lt;/strong&gt;
For each table in your database, export it as a CSV file. Repeat this process for every table you wish to export:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   clickhouse-client &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"SELECT * FROM your_database_name.your_table_name FORMAT CSV"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; your_table_name.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
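&lt;p&gt;With several tables, the single-table export above gets repetitive, so a small shell loop can generate one export command per table. This is a sketch: the database and table names are stand-ins for your own, and the loop only prints the commands so you can review them first (pipe the output to &lt;code&gt;sh&lt;/code&gt; to execute them):&lt;/p&gt;

```shell
# Sketch: generate one ClickHouse export command per table.
# The database name and table list are stand-ins -- substitute your own.
# Commands are printed for review rather than executed; pipe the output to `sh` to run them.
DB="plausible_events_db"
cmds=""
for t in events_v2 sessions_v2 ingest_counters schema_migrations; do
  cmds="${cmds}clickhouse-client --query=\"SELECT * FROM ${DB}.${t} FORMAT CSV\" > ${t}.csv
"
done
printf '%s' "$cmds"
```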



&lt;p&gt;Optionally, compress the CSV files into a single archive for convenience using &lt;code&gt;tar&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-czvf&lt;/span&gt; your_database_backup.tar.gz &lt;span class="k"&gt;*&lt;/span&gt;.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;B. Export from PostgreSQL&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Direct Export to Host:&lt;/strong&gt;
Use &lt;code&gt;docker exec&lt;/code&gt; to run &lt;code&gt;pg_dump&lt;/code&gt; within the PostgreSQL container, saving the output directly to the host machine:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &amp;lt;postgres_container_id&amp;gt; pg_dump &lt;span class="nt"&gt;-U&lt;/span&gt; postgres your_database_name &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /path/on/host/backupfile.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Downloading via SSH&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For us, the easiest way to grab the ClickHouse CSV files and the PostgreSQL dump was to copy them over using &lt;code&gt;scp&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;CSV Folder (Recursive):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   scp &lt;span class="nt"&gt;-r&lt;/span&gt; your_username@remote_host:/path/to/csv_folder /local/directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Single File:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   scp your_username@remote_host:/path/to/backupfile.sql /local/directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Creating A New Plausible Instance&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;With our data safely stored on our local machine, it was time to spin up a new Plausible instance. For this, we recommend the official &lt;a href="https://github.com/plausible/community-edition/"&gt;Community Edition guide on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important Note: Do not create a new account. Once you reach the registration page, stop; it's time to import your existing data instead. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: Importing Data&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;A. Stop the Plausible Docker Service&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Pause the main Plausible container before importing to avoid any data corruption, e.g. &lt;code&gt;docker stop &amp;lt;plausible_container_id&amp;gt;&lt;/code&gt;. &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;B. Importing the PostgreSQL Database&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Access Postgres&lt;/strong&gt;:
First, access the PostgreSQL command line interface within your Docker container. Replace &lt;code&gt;&amp;lt;container_name_or_id&amp;gt;&lt;/code&gt; with the name or ID of your PostgreSQL container:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt; psql &lt;span class="nt"&gt;-U&lt;/span&gt; postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Drop the Existing Database:&lt;/strong&gt;
&lt;strong&gt;Warning:&lt;/strong&gt; Dropping a database permanently deletes it and all data it contains. Make sure your backups are in place first.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From within the PostgreSQL CLI, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;plausible_db&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the database is being accessed by other users, you might encounter an error. To force the database to drop, you can disconnect all connected users by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;pg_terminate_backend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pg_stat_activity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_stat_activity&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;pg_stat_activity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'plausible_db'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;plausible_db&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, exit the PostgreSQL CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create a New Database:&lt;/strong&gt;
From the host, use &lt;code&gt;createdb&lt;/code&gt; in the container to create a new, empty database with the same name:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt; createdb &lt;span class="nt"&gt;-U&lt;/span&gt; postgres plausible_db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Import the SQL File:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, let's import the SQL file into the newly created database. On your host machine, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /path/to/your/plausible_backup.sql | docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt; psql &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="nt"&gt;-d&lt;/span&gt; plausible_db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;/path/to/your/plausible_backup.sql&lt;/code&gt; with the actual path to your SQL file. This command streams the SQL file into the &lt;code&gt;psql&lt;/code&gt; command running inside your Docker container, importing the data into your &lt;code&gt;plausible_db&lt;/code&gt; database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Notes &amp;amp; Handling Issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Ensure that the SQL file contains the necessary commands to create tables and insert data. If it was generated by &lt;code&gt;pg_dump&lt;/code&gt;, it should be fine.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If your SQL file is particularly large, the import process might take some time. Monitor the process and check for any errors in the output.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Role Does Not Exist Error:&lt;/strong&gt; When importing into PostgreSQL we hit a "role does not exist" error: our old install used the database role &lt;code&gt;plausible&lt;/code&gt;, while the new one uses &lt;code&gt;postgres&lt;/code&gt;. Modify the SQL dump file (in any text editor) to replace &lt;code&gt;OWNER TO plausible&lt;/code&gt; with &lt;code&gt;OWNER TO postgres&lt;/code&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
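&lt;p&gt;The ownership fix above can also be scripted with &lt;code&gt;sed&lt;/code&gt;. A sketch: the first line creates a stand-in for your real &lt;code&gt;pg_dump&lt;/code&gt; output, and writing to a new file keeps the original dump untouched:&lt;/p&gt;

```shell
# Stand-in for the real dump file -- in practice, start from your pg_dump output.
printf 'ALTER TABLE public.sites OWNER TO plausible;\n' > plausible_backup.sql

# Rewrite the role so it matches the new install; the original file is left untouched.
sed 's/OWNER TO plausible/OWNER TO postgres/g' plausible_backup.sql > plausible_backup_fixed.sql

grep 'OWNER TO' plausible_backup_fixed.sql
# -> ALTER TABLE public.sites OWNER TO postgres;
```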

&lt;h4&gt;
  
  
  &lt;strong&gt;C. Check ClickHouse DB and Structure&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Access the ClickHouse Client&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Initiate an interactive session with your ClickHouse container to access the ClickHouse client. Replace &lt;code&gt;&amp;lt;container_name_or_id&amp;gt;&lt;/code&gt; with your container's actual name or ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt; clickhouse-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Select Your Database&lt;/strong&gt;
Switch to your target database to ensure subsequent commands apply to it:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;USE&lt;/span&gt; &lt;span class="n"&gt;plausible_events_db&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;List All Tables&lt;/strong&gt;
Display all tables within your selected database:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;TABLES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Examine Table Structure&lt;/strong&gt;
To inspect a specific table's structure and confirm nothing changed, use the &lt;code&gt;DESCRIBE TABLE&lt;/code&gt; command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;DESCRIBE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or more succinctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;DESC&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Query Table Data (Optional)&lt;/strong&gt;
To query data from a particular table, execute a &lt;code&gt;SELECT&lt;/code&gt; statement:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;D. Importing CSV Data into ClickHouse&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Importing a CSV into ClickHouse takes a single command per file. &lt;/p&gt;

&lt;p&gt;Repeat it for every CSV you want to import. For us, these were the non-empty CSV files from the export:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;events_v2&lt;/li&gt;
&lt;li&gt;events&lt;/li&gt;
&lt;li&gt;sessions_v2&lt;/li&gt;
&lt;li&gt;sessions&lt;/li&gt;
&lt;li&gt;ingest_counters&lt;/li&gt;
&lt;li&gt;schema_migrations
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt; bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"clickhouse-client --query=&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;INSERT INTO plausible_events_db.TABLENAME FORMAT CSV&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; &amp;lt; /path/in/container/CSVFILE.csv"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;container_name_or_id&amp;gt;&lt;/code&gt;, &lt;code&gt;TABLENAME&lt;/code&gt;, and &lt;code&gt;CSVFILE&lt;/code&gt; with your container's name or ID, the target table, and the CSV file's path inside the container. &lt;/p&gt;
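&lt;p&gt;As with the export, the per-file import can be wrapped in a loop. A sketch with stand-in names for the container, path, and tables; the commands are printed for review rather than executed (pipe the output to &lt;code&gt;sh&lt;/code&gt; to run them):&lt;/p&gt;

```shell
# Sketch: generate one import command per CSV. The container name, table list,
# and in-container path are stand-ins -- substitute your own values.
CONTAINER="plausible_clickhouse"
CSV_DIR="/path/in/container"
cmds=""
for t in events_v2 sessions_v2 ingest_counters schema_migrations; do
  cmds="${cmds}docker exec -i ${CONTAINER} bash -c \"clickhouse-client --query=\\\"INSERT INTO plausible_events_db.${t} FORMAT CSV\\\" < ${CSV_DIR}/${t}.csv\"
"
done
printf '%s' "$cmds"
```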

&lt;h4&gt;
  
  
  &lt;strong&gt;Verifying Data Import&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Ensure your data was accurately imported by executing a few checks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Count Imported Rows&lt;/strong&gt;
Verify the total row count in the &lt;code&gt;events_v2&lt;/code&gt; table:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt; clickhouse-client &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"SELECT COUNT(*) FROM plausible_events_db.events_v2;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
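&lt;p&gt;A quick sanity check is to compare that row count against the exported CSV (one data row per line). A sketch with stand-in values: in practice, take &lt;code&gt;csv_rows&lt;/code&gt; from &lt;code&gt;wc -l&lt;/code&gt; on your exported file and &lt;code&gt;db_rows&lt;/code&gt; from the count query above:&lt;/p&gt;

```shell
# Stand-in CSV so the check is self-contained; in practice use your exported file.
printf '1,pageview\n2,pageview\n3,custom\n' > events_v2.csv

csv_rows=$(wc -l < events_v2.csv)   # rows in the export
db_rows=3                           # stand-in for the COUNT(*) result above
if [ "$csv_rows" -eq "$db_rows" ]; then
  echo "row counts match ($db_rows)"
else
  echo "MISMATCH: csv=$csv_rows db=$db_rows" >&2
fi
```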



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Inspect Initial Rows&lt;/strong&gt;
Look at the first few rows to confirm the data appears as expected:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt; clickhouse-client &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"SELECT * FROM plausible_events_db.events_v2 LIMIT 10;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Check for Specific Data&lt;/strong&gt;
If you're looking for particular data, such as a specific &lt;code&gt;event_id&lt;/code&gt;, tailor a query to verify its presence:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;container_name_or_id&amp;gt; clickhouse-client &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"SELECT * FROM plausible_events_db.events_v2 WHERE event_id = 'expected_event_id' LIMIT 1;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Final Steps&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Restart Plausible Container:&lt;/strong&gt; After completing the imports, restart the Plausible container to initiate database connections and migrations. This can take 10-15 seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verification:&lt;/strong&gt; Log in with your previous credentials to verify that all data has been successfully migrated.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We hope this guide provides a structured approach to exporting, transferring, and importing Plausible's database data, including troubleshooting common issues. Keep backups and verify data integrity at each step to ensure a smooth transition.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>plausible</category>
      <category>docker</category>
      <category>caprover</category>
    </item>
  </channel>
</rss>
