<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rajeshwar R</title>
    <description>The latest articles on Forem by Rajeshwar R (@r4jeshwar).</description>
    <link>https://forem.com/r4jeshwar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F940778%2Fa1920521-887b-4cc6-ab1f-a9775c683d9a.png</url>
      <title>Forem: Rajeshwar R</title>
      <link>https://forem.com/r4jeshwar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/r4jeshwar"/>
    <language>en</language>
    <item>
      <title>Building Advanced Search with PostgreSQL: pg_search on AWS</title>
      <dc:creator>Rajeshwar R</dc:creator>
      <pubDate>Sun, 09 Nov 2025 07:44:46 +0000</pubDate>
      <link>https://forem.com/r4jeshwar/building-advanced-search-with-postgresql-pgsearch-on-aws-50e1</link>
      <guid>https://forem.com/r4jeshwar/building-advanced-search-with-postgresql-pgsearch-on-aws-50e1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Ever needed powerful search capabilities in your application but dreaded the complexity of managing Elasticsearch or OpenSearch? You're not alone. While these dedicated search engines are powerful, they introduce significant operational overhead: another cluster to deploy, ETL pipelines to maintain, and constant synchronization headaches.&lt;/p&gt;

&lt;p&gt;What if you could get search engine capabilities directly inside PostgreSQL? That's exactly what &lt;code&gt;pg_search&lt;/code&gt; offers. In this post, I'll walk you through what &lt;code&gt;pg_search&lt;/code&gt; is, why it can't run on Amazon RDS, and how we architected a solution using EC2 that keeps everything in PostgreSQL while delivering modern search features.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is pg_search?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;pg_search&lt;/code&gt; is a PostgreSQL extension from ParadeDB that transforms your database into a full-featured search engine. Instead of basic substring matching with &lt;code&gt;LIKE&lt;/code&gt; or &lt;code&gt;ILIKE&lt;/code&gt;, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full-text search&lt;/strong&gt; with BM25 ranking (the algorithm behind modern search engines)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector search&lt;/strong&gt; for semantic similarity using embeddings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid search&lt;/strong&gt; that combines text relevance and vector similarity&lt;/li&gt;
&lt;li&gt;All using standard SQL queries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's a simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
       &lt;span class="n"&gt;bm25_rank&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="s1"&gt;'configure postgres search'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;articles&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This returns ranked results like a real search engine, not just rows that contain your keywords.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why pg_search Instead of Traditional Search Solutions?
&lt;/h2&gt;

&lt;p&gt;You might reach for &lt;code&gt;pg_search&lt;/code&gt; when:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plain Postgres queries aren't enough.&lt;/strong&gt; Basic &lt;code&gt;LIKE&lt;/code&gt; queries don't provide relevance ranking, and scaling text search across multiple columns becomes difficult.&lt;/p&gt;
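
&lt;p&gt;For example, a typical &lt;code&gt;ILIKE&lt;/code&gt; query (against a hypothetical &lt;code&gt;articles&lt;/code&gt; table) matches substrings but gives you no sense of which rows are most relevant:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Every matching row is "equally good": no ranking, no stemming,
-- and the leading wildcard defeats ordinary B-tree index use
SELECT id, title
FROM articles
WHERE title ILIKE '%postgres%' OR body ILIKE '%postgres%';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;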

&lt;p&gt;&lt;strong&gt;You want to avoid separate infrastructure.&lt;/strong&gt; Solutions like Elasticsearch require:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Additional clusters to deploy and monitor&lt;/li&gt;
&lt;li&gt;ETL pipelines to sync data from Postgres&lt;/li&gt;
&lt;li&gt;Managing consistency between two systems&lt;/li&gt;
&lt;li&gt;Extra latency from network hops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;You prefer the Postgres ecosystem.&lt;/strong&gt; With &lt;code&gt;pg_search&lt;/code&gt;, everything stays in Postgres. Your backups, permissions, transactions, and tooling all work the same way. No new stack to learn.&lt;/p&gt;

&lt;h2&gt;
  
  
  The RDS Challenge
&lt;/h2&gt;

&lt;p&gt;Here's the catch: &lt;strong&gt;Amazon RDS doesn't support pg_search&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;RDS only allows a pre-approved list of extensions. Since RDS is a managed service, AWS controls what can be loaded into the PostgreSQL process. If you try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;EXTENSION&lt;/span&gt; &lt;span class="n"&gt;pg_search&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR: extension "pg_search" is not available
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This limitation is by design. To use &lt;code&gt;pg_search&lt;/code&gt;, you need a self-managed PostgreSQL instance where you control the installation.&lt;/p&gt;
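
&lt;p&gt;You can confirm what your RDS instance does support by querying the catalog; &lt;code&gt;pg_search&lt;/code&gt; will not appear in the list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Lists every extension this instance allows you to install
SELECT name, default_version
FROM pg_available_extensions
ORDER BY name;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;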

&lt;h2&gt;
  
  
  Our Solution: RDS + EC2 Architecture
&lt;/h2&gt;

&lt;p&gt;Since we can't install &lt;code&gt;pg_search&lt;/code&gt; on RDS, we use a hybrid approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon RDS&lt;/strong&gt; remains our primary database for all application writes and transactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EC2 PostgreSQL&lt;/strong&gt; runs as a search replica with &lt;code&gt;pg_search&lt;/code&gt; installed&lt;/li&gt;
&lt;li&gt;Data flows from RDS to EC2 using logical replication&lt;/li&gt;
&lt;li&gt;Applications query RDS for normal operations and EC2 for search&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the architecture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;          ┌─────────────────────────────┐
          │         Application         │
          │  (API / backend / service)  │
          └────────────┬────────────────┘
                       │
                 Write / Read 
                       │
          ┌────────────▼──────────────┐
          │   Amazon RDS PostgreSQL   │
          │        (Primary DB)       │
          └────────────┬──────────────┘
                       │
                Logical Replication
                       │
          ┌────────────▼──────────────┐
          │        Amazon EC2         │
          │  PostgreSQL + pg_search   │
          │    (Search / Analytics)   │
          └────────────▲──────────────┘
                       │
                Search Queries
                       │
          ┌────────────┴──────────────┐
          │         Application       │
          └───────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything stays inside your private AWS VPC with no external services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up pg_search on EC2
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Provision Your EC2 Instance
&lt;/h3&gt;

&lt;p&gt;Launch an EC2 instance (t3.medium or larger) in the same VPC as your RDS instance. Place it in a private subnet and attach sufficient storage (100+ GB gp3 EBS).&lt;/p&gt;

&lt;p&gt;Configure security groups to allow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 → RDS communication on port 5432&lt;/li&gt;
&lt;li&gt;Your admin access via SSH&lt;/li&gt;
&lt;/ul&gt;
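
&lt;p&gt;As a sketch (the security group IDs below are placeholders), the EC2 → RDS rule can be added with the AWS CLI by allowing the EC2 instance's security group as the source on the RDS group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Allow Postgres traffic from the EC2 security group into the RDS security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-RDS_GROUP_ID \
    --protocol tcp --port 5432 \
    --source-group sg-EC2_GROUP_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;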

&lt;h3&gt;
  
  
  Step 2: Install PostgreSQL with pg_search
&lt;/h3&gt;

&lt;p&gt;We'll use Docker for a reproducible setup. Create a &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; postgres:16&lt;/span&gt;

&lt;span class="c"&gt;# Install build dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    build-essential git curl pkg-config libssl-dev &lt;span class="se"&gt;\
&lt;/span&gt;    libclang-dev clang postgresql-server-dev-16 &lt;span class="se"&gt;\
&lt;/span&gt;    libicu-dev wget ca-certificates

&lt;span class="c"&gt;# Install Rust toolchain&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; CARGO_HOME=/usr/local/cargo&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PATH=$CARGO_HOME/bin:$PATH&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;curl https://sh.rustup.rs &lt;span class="nt"&gt;-sSf&lt;/span&gt; | sh &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CARGO_HOME&lt;/span&gt;&lt;span class="s2"&gt;/env"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    cargo &lt;span class="nb"&gt;install &lt;/span&gt;cargo-pgrx &lt;span class="nt"&gt;--version&lt;/span&gt; 0.15.0 &lt;span class="nt"&gt;--locked&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    cargo pgrx init &lt;span class="nt"&gt;--pg16&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;which pg_config&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Install pgvector (optional, for vector search)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;wget https://github.com/pgvector/pgvector/archive/refs/tags/v0.5.1.tar.gz &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xvzf&lt;/span&gt; v0.5.1.tar.gz &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;cd &lt;/span&gt;pgvector-0.5.1 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; make &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; make &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;cd&lt;/span&gt; .. &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; pgvector-0.5.1 v0.5.1.tar.gz

&lt;span class="c"&gt;# Build and install pg_search&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;git clone https://github.com/paradedb/paradedb.git &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;cd &lt;/span&gt;paradedb/pg_search &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    cargo pgrx &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--release&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;cd&lt;/span&gt; ../.. &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; paradedb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.8'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pgsearch-ec2&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;your_strong_password&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5432:5432"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/mnt/pgdata:/var/lib/postgresql/data&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wal_level=logical"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_replication_slots=10"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_wal_senders=10"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shared_preload_libraries=pg_search"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
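
&lt;p&gt;Once the container is up, it's worth verifying that the build actually produced both extensions before going further:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Both rows should appear if the Docker build succeeded
docker exec pgsearch-ec2 psql -U postgres \
    -c "SELECT name, default_version FROM pg_available_extensions WHERE name IN ('pg_search', 'vector');"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;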



&lt;h3&gt;
  
  
  Step 3: Enable the Extensions
&lt;/h3&gt;

&lt;p&gt;Connect to your EC2 PostgreSQL instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;psql &lt;span class="nt"&gt;-h&lt;/span&gt; localhost &lt;span class="nt"&gt;-U&lt;/span&gt; postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create your database and enable extensions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;app_db&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="k"&gt;c&lt;/span&gt; &lt;span class="n"&gt;app_db&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;EXTENSION&lt;/span&gt; &lt;span class="n"&gt;pg_search&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;EXTENSION&lt;/span&gt; &lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;-- if using vector search&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Create Search Indexes
&lt;/h3&gt;

&lt;p&gt;Now you can create powerful search indexes on your tables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- BM25 full-text search index&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;documents_search_idx&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="n"&gt;bm25&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key_field&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Vector similarity index (for semantic search)&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;documents_embedding_idx&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="n"&gt;ivfflat&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="n"&gt;vector_l2_ops&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lists&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
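
&lt;p&gt;The indexes above assume a &lt;code&gt;documents&lt;/code&gt; table shaped roughly like this (hypothetical; your schema and embedding dimension will differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;CREATE TABLE documents (
    id        uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    title     text NOT NULL,
    body      text NOT NULL,
    embedding vector(1536)  -- pgvector column; the dimension must match your embedding model
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;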



&lt;h2&gt;
  
  
  Syncing Data from RDS to EC2
&lt;/h2&gt;

&lt;p&gt;Installing &lt;code&gt;pg_search&lt;/code&gt; is only half the solution. We need continuous data replication from RDS to EC2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial Data Load
&lt;/h3&gt;

&lt;p&gt;First, take a snapshot of your RDS data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Dump from RDS&lt;/span&gt;
pg_dump &lt;span class="nt"&gt;-h&lt;/span&gt; your-rds-endpoint.amazonaws.com &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-U&lt;/span&gt; your_user &lt;span class="nt"&gt;-d&lt;/span&gt; app_db &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-Fc&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; app_db_dump.backup

&lt;span class="c"&gt;# Restore to EC2&lt;/span&gt;
psql &lt;span class="nt"&gt;-h&lt;/span&gt; localhost &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"CREATE DATABASE app_db;"&lt;/span&gt;
pg_restore &lt;span class="nt"&gt;-h&lt;/span&gt; localhost &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="se"&gt;\&lt;/span&gt;
           &lt;span class="nt"&gt;-d&lt;/span&gt; app_db app_db_dump.backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure RDS for Logical Replication
&lt;/h3&gt;

&lt;p&gt;In your RDS parameter group, set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rds.logical_replication = 1
max_replication_slots = 50 (or more, depending on your needs)
max_wal_senders = 10
max_slot_wal_keep_size = 2048 (MB — safety net to avoid WAL filling storage)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply changes and reboot your RDS instance.&lt;/p&gt;
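
&lt;p&gt;After the reboot, confirm the settings took effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- wal_level should report 'logical' once rds.logical_replication is applied
SELECT name, setting
FROM pg_settings
WHERE name IN ('wal_level', 'max_replication_slots', 'max_wal_senders');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;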

&lt;p&gt;Create a replication user on RDS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;ROLE&lt;/span&gt; &lt;span class="n"&gt;repl_user&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="n"&gt;LOGIN&lt;/span&gt; &lt;span class="n"&gt;REPLICATION&lt;/span&gt; &lt;span class="n"&gt;PASSWORD&lt;/span&gt; &lt;span class="s1"&gt;'secure_password'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;CONNECT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;app_db&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;repl_user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;USAGE&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;SCHEMA&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;repl_user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;ALL&lt;/span&gt; &lt;span class="n"&gt;TABLES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="k"&gt;SCHEMA&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;repl_user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a publication on RDS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Replicate all tables&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;PUBLICATION&lt;/span&gt; &lt;span class="n"&gt;app_pub&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;ALL&lt;/span&gt; &lt;span class="n"&gt;TABLES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Or specific tables only&lt;/span&gt;
&lt;span class="c1"&gt;-- CREATE PUBLICATION app_pub FOR TABLE documents, users;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set Up Subscription on EC2
&lt;/h3&gt;

&lt;p&gt;On your EC2 PostgreSQL instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;SUBSCRIPTION&lt;/span&gt; &lt;span class="n"&gt;app_sub&lt;/span&gt;
  &lt;span class="k"&gt;CONNECTION&lt;/span&gt; &lt;span class="s1"&gt;'host=your-rds-endpoint.amazonaws.com port=5432 
              user=repl_user password=secure_password dbname=app_db'&lt;/span&gt;
  &lt;span class="n"&gt;PUBLICATION&lt;/span&gt; &lt;span class="n"&gt;app_pub&lt;/span&gt;
  &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;slot_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;app_slot&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;create_slot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;copy_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;-- data already loaded via pg_restore&lt;/span&gt;
    &lt;span class="n"&gt;enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verify Replication
&lt;/h3&gt;

&lt;p&gt;Check replication status on EC2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;subname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;latest_end_time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;latest_end_time&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;delay&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_stat_subscription&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test with a simple insert on RDS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gen_random_uuid&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="s1"&gt;'Test'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Replication check'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The row should appear on EC2 within seconds.&lt;/p&gt;
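
&lt;p&gt;It's also worth watching the publisher side: if the EC2 subscriber falls behind or disconnects, the replication slot on RDS retains WAL and can eat into storage. A quick check on RDS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- WAL the subscriber has not yet confirmed; this should stay small
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS retained_wal
FROM pg_replication_slots;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;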

&lt;h2&gt;
  
  
  Running Search Queries
&lt;/h2&gt;

&lt;p&gt;Now your application can run powerful search queries against the EC2 instance:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full-text search with ranking:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;bm25_rank&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="s1"&gt;'postgres configuration'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Vector similarity search:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&amp;gt;&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;similarity&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&amp;gt;&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your application connects to RDS for writes and normal reads, and to EC2 for search operations.&lt;/p&gt;
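
&lt;p&gt;&lt;strong&gt;Hybrid search&lt;/strong&gt; combines both signals in a single query. The sketch below is illustrative only: the 50/50 weighting is arbitrary (in practice you'd normalize the two scores first), &lt;code&gt;$1&lt;/code&gt; is the query embedding parameter, and it assumes ParadeDB's &lt;code&gt;@@@&lt;/code&gt; operator and &lt;code&gt;paradedb.score()&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Blend BM25 text relevance with vector similarity (weights are illustrative)
SELECT id, title,
       0.5 * paradedb.score(id) + 0.5 * (1 - (embedding &amp;lt;=&amp;gt; $1)) AS hybrid_score
FROM documents
WHERE body @@@ 'postgres configuration'
ORDER BY hybrid_score DESC
LIMIT 10;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;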

&lt;h2&gt;
  
  
  Keeping New Tables in Sync
&lt;/h2&gt;

&lt;p&gt;If you used &lt;code&gt;FOR ALL TABLES&lt;/code&gt; in your publication, new tables are automatically included on the publisher side, but the subscriber only "sees" them after you refresh the subscription:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;ALTER SUBSCRIPTION app_sub REFRESH PUBLICATION WITH (copy_data = true);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can automate this with a cron job on EC2 (note that a crontab entry must fit on a single line):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;*/15 * * * * psql -h localhost -U postgres -d app_db -c "ALTER SUBSCRIPTION app_sub REFRESH PUBLICATION WITH (copy_data = true);"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This ensures new tables created on RDS are picked up and synced to EC2 regularly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pros and Cons
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Advantages
&lt;/h3&gt;

&lt;p&gt;✅ &lt;strong&gt;Advanced search features&lt;/strong&gt; – BM25 ranking, vector search, and hybrid search capabilities&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Postgres-native&lt;/strong&gt; – Use SQL and standard Postgres tooling instead of learning new systems&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;RDS stays primary&lt;/strong&gt; – Keep RDS for transactions, backups, and managed infrastructure&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Private network&lt;/strong&gt; – Everything stays inside your VPC with no external dependencies&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Flexible extensions&lt;/strong&gt; – Install any Postgres extension on EC2, not limited by RDS restrictions&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges
&lt;/h3&gt;

&lt;p&gt;❌ &lt;strong&gt;Additional infrastructure&lt;/strong&gt; – You now manage a self-hosted Postgres instance&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;Replication complexity&lt;/strong&gt; – Setting up and maintaining logical replication requires care&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;Near real-time search&lt;/strong&gt; – Search data lags RDS by a few seconds (acceptable for most use cases)&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;Dual endpoints&lt;/strong&gt; – Application needs to connect to two different databases&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;Higher costs&lt;/strong&gt; – Extra EC2 instance and storage expenses&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We chose this architecture because it gives us powerful search capabilities without the operational complexity of maintaining a separate search infrastructure. While managing an EC2 Postgres instance adds some overhead, it's significantly simpler than running Elasticsearch clusters and ETL pipelines.&lt;/p&gt;

&lt;p&gt;For teams already comfortable with PostgreSQL, this approach feels natural. Everything stays in the Postgres ecosystem, uses familiar SQL, and remains inside your private network. The trade-off of managing one additional Postgres node is worthwhile for the search capabilities you gain.&lt;/p&gt;

&lt;p&gt;If you're considering this architecture, start small: set up a test EC2 instance, replicate a subset of tables, and validate that the search performance meets your needs before committing to production deployment.&lt;/p&gt;

&lt;p&gt;Have questions about this setup? Feel free to reach out or leave a comment below.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>postgres</category>
      <category>aws</category>
    </item>
    <item>
      <title>A Step-by-Step Guide to Deploying an Application on AWS App Runner with GitHub Action workflow</title>
      <dc:creator>Rajeshwar R</dc:creator>
      <pubDate>Wed, 31 May 2023 17:11:07 +0000</pubDate>
      <link>https://forem.com/ittrident/a-step-by-step-guide-to-deploying-an-application-on-aws-app-runner-with-github-action-workflow-17ke</link>
      <guid>https://forem.com/ittrident/a-step-by-step-guide-to-deploying-an-application-on-aws-app-runner-with-github-action-workflow-17ke</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Privacy-Protect&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Securely share passwords and sensitive files over email, or store them in insecure locations like cloud drives, using nothing more than a desktop or mobile web browser such as Chrome or Safari.&lt;/p&gt;

&lt;p&gt;No special software. No need to create an account. It's free, open-source, keeps your private data a secret, and leaves you alone.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;App Runner&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;AWS App Runner is a fully managed container application service that lets you build, deploy, and run containerized web applications and API services without prior infrastructure or container experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cons of App Runner&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Lack of EFS Mount Support:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;App Runner does not support mounting Amazon Elastic File System (EFS) volumes. Instead, AWS offers an &lt;a href="https://aws.amazon.com/blogs/containers/deploy-python-application-using-aws-app-runner/" rel="noopener noreferrer"&gt;alternative solution&lt;/a&gt; with DynamoDB for handling data storage. The lack of EFS mount support can be a challenge for certain use cases: limited file storage flexibility, potential performance impact, and additional configuration complexity. However, by leveraging DynamoDB as an alternative storage solution, developers can work around this limitation and continue to build scalable applications.&lt;/p&gt;

&lt;p&gt;Go through this Architecture diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadvnxq1w8aecr5wo958p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadvnxq1w8aecr5wo958p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Create an IAM role&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;This role is used by AWS App Runner to pull Docker images from Amazon ECR.&lt;/p&gt;

&lt;p&gt;The following are step-by-step instructions to create a service role and associate the &lt;code&gt;AWSAppRunnerServicePolicyForECRAccess&lt;/code&gt; policy with it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;step 1&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Create a role named &lt;code&gt;app-runner-service-role&lt;/code&gt; with the following trust policy:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "build.apprunner.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;step 2&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Attach the &lt;code&gt;AWSAppRunnerServicePolicyForECRAccess&lt;/code&gt; existing policy to the role &lt;code&gt;app-runner-service-role&lt;/code&gt;.&lt;/p&gt;
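&lt;p&gt;If you prefer the AWS CLI, both steps can be done from a terminal. The trust-policy file name is an assumption; save the JSON from step 1 as &lt;code&gt;trust-policy.json&lt;/code&gt; first:&lt;/p&gt;

```shell
# Step 1: create the role with the App Runner trust policy
aws iam create-role \
  --role-name app-runner-service-role \
  --assume-role-policy-document file://trust-policy.json

# Step 2: attach the managed ECR-access policy to the role
aws iam attach-role-policy \
  --role-name app-runner-service-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSAppRunnerServicePolicyForECRAccess
```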

&lt;p&gt;&lt;strong&gt;Here the steps to deploy Privacy-Protect application in App Runner using GitHub Action&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Create an AWS ECR private repository&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvyblvzdwkbm2gs6xh2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvyblvzdwkbm2gs6xh2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
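&lt;p&gt;The same repository can also be created from the AWS CLI; the region shown below is only an example:&lt;/p&gt;

```shell
# Create a private ECR repository named privacy-protect
aws ecr create-repository \
  --repository-name privacy-protect \
  --region us-east-1
```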

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Clone the repository&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

git clone https://github.com/r4jeshwar/privacy-protect.git
cd privacy-protect


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Migrate from server deployment (Vercel, @sveltejs/adapter-vercel) to a static site generator (@sveltejs/adapter-static)&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;First, install the static adapter: &lt;code&gt;npm i -D @sveltejs/adapter-static&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then open the &lt;code&gt;svelte.config.js&lt;/code&gt; file, delete its existing content, and paste in the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import adapter from "@sveltejs/adapter-static";
import { vitePreprocess } from "@sveltejs/kit/vite";
import { mdsvex } from "mdsvex";
import { resolve } from "path";

/** @type {import('@sveltejs/kit').Config} */
export default {
  extensions: [".md", ".svelte"],
  kit: {
    adapter: adapter({
       pages: 'build',
       assets: 'build',
      fallback: null,
      precompress: false,
      strict: true
    }),
    alias: {
      $components: resolve("src/components"),
      $icons: resolve("src/assets/icons"),
    },
    csp: {
      directives: {
        "base-uri": ["none"],
        "default-src": ["self"],
        "frame-ancestors": ["none"],
        "img-src": ["self", "data:"],
        "object-src": ["none"],
        // See https://github.com/sveltejs/svelte/issues/6662
        "style-src": ["self", "unsafe-inline"],
        "upgrade-insecure-requests": true,
        "worker-src": ["none"],
      },
      mode: "auto",
    },
  },
  preprocess: [
    mdsvex({
      extensions: [".md"],
      layout: {
        blog: "src/routes/blog/post.svelte",
      },
    }),
    vitePreprocess(),
  ],
};


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: this change makes the app deployable on a variety of infrastructure and keeps the Docker image small. If you deploy on Vercel, you can skip it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Why switch the adapter from Vercel to static?&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;adapter-static&lt;/code&gt; lets you deploy the application to a wide range of static hosting providers. It also simplifies the deployment process by eliminating the need for serverless infrastructure and Vercel-specific configuration.&lt;/p&gt;

&lt;p&gt;Overall, migrating from adapter-vercel to adapter-static gives you greater flexibility and simplicity.&lt;/p&gt;
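&lt;p&gt;You can sanity-check the static build locally before containerizing it. This assumes the project keeps SvelteKit's default &lt;code&gt;build&lt;/code&gt; and &lt;code&gt;preview&lt;/code&gt; scripts:&lt;/p&gt;

```shell
# Build the static site into build/ and serve it locally for a quick check
npm run build
npm run preview
```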

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Dockerfile for privacy-protect to deploy in App Runner&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

FROM node:18-alpine as build

WORKDIR /app

COPY . .

RUN npm i
RUN npm run build


FROM nginx:stable-alpine

COPY --from=build /app/build/ /usr/share/nginx/html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
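&lt;p&gt;Before wiring up the workflow, you can test the image locally; the tag and host port below are arbitrary choices:&lt;/p&gt;

```shell
# Build the image and serve the static site on http://localhost:8080
docker build -t privacy-protect:local .
docker run --rm -p 8080:80 privacy-protect:local
```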

&lt;p&gt;Push the changes to your repository so they are picked up by the GitHub Actions workflow we'll set up next.&lt;/p&gt;

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Configure the GitHub Action secrets&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Go to your project repository and open &lt;code&gt;settings&lt;/code&gt;. In the security section, click the &lt;code&gt;Secrets and variables&lt;/code&gt; drop-down and then &lt;code&gt;Actions&lt;/code&gt;. Configure the following secrets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlyu9c4u1ou8gpqze8ar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlyu9c4u1ou8gpqze8ar.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; is your AWS user access key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; is your AWS user secret key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AWS_REGION&lt;/code&gt; is the AWS region where you are creating your resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ROLE_ARN&lt;/code&gt; is the ARN of the IAM role you created earlier, named &lt;code&gt;app-runner-service-role&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
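&lt;p&gt;If you use the GitHub CLI, the same secrets can be set from a terminal inside the repository; &lt;code&gt;gh&lt;/code&gt; prompts for each value:&lt;/p&gt;

```shell
# Set each Actions secret from the command line
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set AWS_REGION
gh secret set ROLE_ARN
```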

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Configure the new workflow in GitHub Action&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Click &lt;code&gt;set up a workflow by yourself&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgna6eenrxx8a9n7muv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgna6eenrxx8a9n7muv8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, in the file editor (&lt;code&gt;main.yml&lt;/code&gt;), paste the following YAML:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy to App Runner - Image based # Name of the workflow
on:
  push:
    branches: [ main ] # Trigger workflow on git push to main branch
  workflow_dispatch: # Allow manual invocation of the workflow
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          persist-credentials: false

      - name: Configure AWS credentials
        id: aws-credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: privacy-protect
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"

      - name: Deploy to App Runner
        id: deploy-apprunner
        uses: awslabs/amazon-app-runner-deploy@main
        with:
          service: privacy-protect-app-runner
          image: ${{ steps.build-image.outputs.image }}
          access-role-arn: ${{ secrets.ROLE_ARN }}
          region: ${{ secrets.AWS_REGION }}
          cpu: 1
          memory: 2
          port: 80
          wait-for-service-stability: true

      - name: App Runner ID
        run: echo "App runner ID ${{ steps.deploy-apprunner.outputs.service-id }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;


  ᛫ &lt;em&gt;&lt;strong&gt;GitHub Action runs successfully after a few minutes&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;In the AWS App Runner dashboard, you can see that your application is up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foewr1yxlzr5gwmpi4qxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foewr1yxlzr5gwmpi4qxe.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>awsapprunner</category>
      <category>githubactions</category>
      <category>docker</category>
    </item>
    <item>
      <title>How to Deploy a TimeVault Application on Cloud Run with Cloud Build on GCP</title>
      <dc:creator>Rajeshwar R</dc:creator>
      <pubDate>Tue, 23 May 2023 07:00:32 +0000</pubDate>
      <link>https://forem.com/ittrident/timevault-on-gcp-cloudrun-4d5k</link>
      <guid>https://forem.com/ittrident/timevault-on-gcp-cloudrun-4d5k</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.tourl"&gt;&lt;/a&gt;&lt;strong&gt;Timevault&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A deadman's switch to encrypt your vulnerability reports or other compromising data to be decryptable at a set time in the future. Uses tlock-js and is powered by drand. Messages encrypted with timevault are also compatible with the go tlock library.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloudrun&lt;/strong&gt;&lt;br&gt;
Cloud Run is a managed compute platform that lets you run containers directly on top of Google's scalable infrastructure. You can deploy code written in any programming language on Cloud Run if you can build a container image from it. In fact, building container images is optional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structure of Timevault&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Create a repository in GCP Cloud Source Repositories named &lt;code&gt;timevault-cloudrun&lt;/code&gt;, and clone it to your local machine.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
    Clone the timevault repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/r4jeshwar/timevault.git
cd timevault
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 3:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Mirror the GitHub (timevault) repository to a GCP Cloud Source Repository.&lt;/p&gt;

&lt;p&gt;Go to the Cloud Source Repositories service, click &lt;code&gt;Add repository&lt;/code&gt;, choose &lt;code&gt;connect external repository&lt;/code&gt;, and click &lt;code&gt;continue&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---foPceYu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wz6sugz1i1h8658e72p3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---foPceYu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wz6sugz1i1h8658e72p3.png" alt="Image description" width="662" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your project ID, choose &lt;code&gt;GitHub&lt;/code&gt; as the Git provider, pick the repository you need, and click &lt;code&gt;connect selected repository&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nCZSfAd_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/begprqseppf19iu8ien3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nCZSfAd_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/begprqseppf19iu8ien3.png" alt="Image description" width="657" height="922"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, your GitHub repository is connected to your GCP Cloud Source Repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kqhH0neG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5saa57iragp4nj1am2p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kqhH0neG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5saa57iragp4nj1am2p.png" alt="Image description" width="449" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 4:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
With this multi-stage Dockerfile, we can deploy the application on Cloud Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18-alpine AS build

WORKDIR /app

COPY package*.json ./
COPY ./src ./src

RUN npm i
RUN npm run build

FROM nginx:stable-alpine

COPY --from=build /app/dist/ /usr/share/nginx/html/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 5:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Write a &lt;code&gt;cloudbuild.yaml&lt;/code&gt; to deploy the image on Cloud Run. Here is the &lt;code&gt;cloudbuild.yaml&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; steps:
 # Build the container image
 - name: 'gcr.io/cloud-builders/docker'
   args: ['build', '-t', 'gcr.io/$_PROJECT_ID/$_IMAGE_NAME:$COMMIT_SHA', '.']
 # Push the container image to Container Registry
 - name: 'gcr.io/cloud-builders/docker'
   args: ['push', 'gcr.io/$_PROJECT_ID/$_IMAGE_NAME:$COMMIT_SHA']
 # Deploy container image to Cloud Run
 - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
   entrypoint: gcloud
   args:
   - 'run'
   - 'deploy'
   - '$_IMAGE_NAME'
   - '--platform=managed'
   - '--image'
   - 'gcr.io/$_PROJECT_ID/$_IMAGE_NAME:$COMMIT_SHA'
   - '--region'
   - '$_REGION'
   - '--allow-unauthenticated'
   - '--port'
   - '80'  
 images:
 - 'gcr.io/$_PROJECT_ID/$_IMAGE_NAME:$COMMIT_SHA'

 substitutions:
   _IMAGE_NAME: timevault
   _REGION: &amp;lt;YOUR_REGION&amp;gt;
   _PROJECT_ID: &amp;lt;YOUR_PROJECT_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;YOUR_REGION&lt;/code&gt; is the region of the Cloud Run service you are deploying.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;YOUR_PROJECT_ID&lt;/code&gt; is your Google Cloud project ID where your image is stored.&lt;/li&gt;
&lt;/ul&gt;
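&lt;p&gt;To validate the config before creating a trigger, you can run the build once by hand, passing the substitutions on the command line. The region and project ID below are example values:&lt;/p&gt;

```shell
# One-off Cloud Build run using the same cloudbuild.yaml
gcloud builds submit \
  --config cloudbuild.yaml \
  --substitutions _IMAGE_NAME=timevault,_REGION=us-central1,_PROJECT_ID=my-project
```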

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 6:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Go to Cloud Build and create a trigger.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fafzZsWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7mt8ndfv6oz5x88xze9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fafzZsWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7mt8ndfv6oz5x88xze9z.png" alt="Image description" width="792" height="921"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m8Zg8_dC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oatl3fsd96ws02ytljc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m8Zg8_dC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oatl3fsd96ws02ytljc2.png" alt="Image description" width="792" height="908"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 7:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Trigger the Cloud Build with the following steps:&lt;br&gt;
&lt;code&gt;Cloud Build&lt;/code&gt; --&amp;gt; &lt;code&gt;Triggers&lt;/code&gt; --&amp;gt; click &lt;code&gt;RUN&lt;/code&gt; --&amp;gt; click &lt;code&gt;RUN TRIGGER&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x9lX3V-1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rktueuiibrrbjyeqwor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x9lX3V-1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rktueuiibrrbjyeqwor.png" alt="Image description" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application is now deployed on Cloud Run. Copy the Cloud Run URL and open it in a web browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eBXXAMVR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4j73tl7oasha9hgkd4a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eBXXAMVR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4j73tl7oasha9hgkd4a.png" alt="Image description" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adding a custom domain in Cloud Run&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you deploy a service to Cloud Run, you are provided with a default domain to access it. However, you can use your own domain or subdomain instead of the default one.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Go to the Cloud Run service and click &lt;strong&gt;MANAGE CUSTOM DOMAINS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tdcagLpl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d550vqfvkgx9u7i9331t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tdcagLpl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d550vqfvkgx9u7i9331t.png" alt="Image description" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Go to the domain mappings page and click &lt;strong&gt;ADD MAPPING&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k7SsKM9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlkofi6gv4mxeyr5yua1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k7SsKM9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlkofi6gv4mxeyr5yua1.png" alt="Image description" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 3:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the add mapping form, select the service you want to map, choose to verify a new domain, and enter your domain, as shown below. Click &lt;strong&gt;CONTINUE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LIIpiQ-l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zunw5epp2h6r95yln4lj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LIIpiQ-l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zunw5epp2h6r95yln4lj.png" alt="Image description" width="728" height="651"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It will take some time to verify your domain; click &lt;strong&gt;REFRESH&lt;/strong&gt; to check whether verification is done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y_8GpxhF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sz9jyhuwi1d08x065aes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y_8GpxhF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sz9jyhuwi1d08x065aes.png" alt="Image description" width="607" height="682"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the verification is done, enter the subdomain to map to the service you have selected, then click &lt;strong&gt;CONTINUE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2lHtutmm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpgg09zw6tm5ckbjc4ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2lHtutmm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpgg09zw6tm5ckbjc4ti.png" alt="Image description" width="642" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will get a DNS record for your custom domain that maps it to your Cloud Run service. Add this DNS record at your domain provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8nrfDm9l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3hk1x0333odyognm5a0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8nrfDm9l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3hk1x0333odyognm5a0.png" alt="Image description" width="641" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, once the DNS record is in place, you can use your custom domain to access your deployed Cloud Run service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BZgfEj3c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hahzk0o7nzyfnlxd6dew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BZgfEj3c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hahzk0o7nzyfnlxd6dew.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;
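&lt;p&gt;As a sketch of the CLI equivalent, a verified domain can also be mapped with &lt;code&gt;gcloud&lt;/code&gt; (the command is in beta, and the domain and region below are placeholders):&lt;/p&gt;

```shell
# Map a verified custom domain to the timevault Cloud Run service
gcloud beta run domain-mappings create \
  --service timevault \
  --domain example.com \
  --region us-central1
```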

</description>
    </item>
  </channel>
</rss>
