<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shrinivas Vishnupurikar</title>
    <description>The latest articles on Forem by Shrinivas Vishnupurikar (@shrinivasv73).</description>
    <link>https://forem.com/shrinivasv73</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1529435%2F1998fa75-f702-4a43-9513-4d73642351d7.png</url>
      <title>Forem: Shrinivas Vishnupurikar</title>
      <link>https://forem.com/shrinivasv73</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shrinivasv73"/>
    <language>en</language>
    <item>
      <title>Schema, COPY, MERGE, and Immutability — A First-Principles Guide for Data Engineers</title>
      <dc:creator>Shrinivas Vishnupurikar</dc:creator>
      <pubDate>Mon, 05 Jan 2026 03:08:29 +0000</pubDate>
      <link>https://forem.com/shrinivasv73/schema-copy-merge-and-immutability-a-first-principles-guide-for-data-engineers-3b98</link>
      <guid>https://forem.com/shrinivasv73/schema-copy-merge-and-immutability-a-first-principles-guide-for-data-engineers-3b98</guid>
      <description>&lt;p&gt;In modern data engineering conversations, terms like schema-on-read, schema-on-write, COPY, MERGE, and immutable partitions are used very often.&lt;/p&gt;

&lt;p&gt;When I started out in data engineering, I often heard my senior engineers ( mentors ) mention these terms while designing and architecting pipelines and data delivery platforms. Back then, these were merely combinations of words to me, but for my seniors they were design choices that could make or break project deliverables.&lt;/p&gt;

&lt;p&gt;These terms are not isolated; they are deeply connected, and together they give rise to data processing patterns and data management techniques.&lt;/p&gt;

&lt;p&gt;You could start by understanding them in isolation, but understanding them together builds a strong mental model that applies across systems, formats, and platforms.&lt;/p&gt;

&lt;p&gt;This article explains these ideas from first principles, then connects them to open table formats, with a strong focus on why these patterns exist.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. What Is a “Schema” — First Principles
&lt;/h2&gt;

&lt;p&gt;At its core, a schema is a contract: an agreement between you and the data about the shape that data must take.&lt;/p&gt;

&lt;p&gt;A schema typically defines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What fields exist&lt;/li&gt;
&lt;li&gt;What each field represents&lt;/li&gt;
&lt;li&gt;What type of data each field can hold&lt;/li&gt;
&lt;li&gt;Whether fields are optional or mandatory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this article, though, the most important question is not what a schema is, but when the system enforces it.&lt;/p&gt;

&lt;p&gt;That single question gives birth to two fundamental models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schema-on-Write&lt;/li&gt;
&lt;li&gt;Schema-on-Read&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1.1 Schema-on-Write (Validate First, Store Later)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Definition
&lt;/h4&gt;

&lt;p&gt;Schema-on-write enforces structure before data is stored in the target system. If incoming data does not match the expected structure, it is rejected.&lt;/p&gt;

&lt;p&gt;Think of schema-on-write like a strict security checkpoint at a college entrance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The college has a predefined rule: only students with a valid college ID are allowed inside.&lt;/li&gt;
&lt;li&gt;The security guard checks the ID before allowing entry.&lt;/li&gt;
&lt;li&gt;If you do not have the ID, you are stopped at the gate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this analogy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The college premises represent the target data system&lt;/li&gt;
&lt;li&gt;The college ID card represents the schema&lt;/li&gt;
&lt;li&gt;The security check represents schema validation&lt;/li&gt;
&lt;li&gt;Entry is allowed only if the contract is satisfied&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Intuition
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“Only clean, trusted data is allowed inside.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Story Example
&lt;/h4&gt;

&lt;p&gt;Consider a bank account system.&lt;/p&gt;

&lt;p&gt;Before saving a record:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Account number must exist&lt;/li&gt;
&lt;li&gt;Balance must be numeric&lt;/li&gt;
&lt;li&gt;Date fields must be valid&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If anything is incorrect, the record is refused. Bad data never enters the system.&lt;/p&gt;
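&lt;p&gt;The checkpoint can be sketched in a few lines of Python. This is an illustrative validator, not any particular database engine's mechanism; the bank-account schema and field names are hypothetical.&lt;/p&gt;

```python
from datetime import date

# Schema-on-write sketch: validate a record against a declared schema
# BEFORE it is stored. Invalid records are rejected at the gate.
SCHEMA = {                      # hypothetical bank-account schema
    "account_number": str,      # must exist and be a string
    "balance": float,           # must be numeric
    "opened_on": str,           # ISO date, checked separately below
}

def validate(record: dict) -> dict:
    for field, expected_type in SCHEMA.items():
        if field not in record:
            raise ValueError(f"missing mandatory field: {field}")
        if not isinstance(record[field], expected_type):
            raise TypeError(f"{field} must be {expected_type.__name__}")
    date.fromisoformat(record["opened_on"])   # raises if the date is invalid
    return record

store = []

def write(record: dict) -> None:
    store.append(validate(record))            # validate first, store later

write({"account_number": "ACC-1", "balance": 250.0, "opened_on": "2024-01-15"})
try:
    write({"account_number": "ACC-2", "balance": "oops", "opened_on": "2024-01-16"})
except TypeError as e:
    print("rejected:", e)                     # bad data never enters the store
```

&lt;p&gt;The rejected record never reaches &lt;code&gt;store&lt;/code&gt;; errors surface at write time, exactly where this pattern wants them.&lt;/p&gt;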

&lt;h4&gt;
  
  
  Why This Pattern Exists
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Data correctness is critical&lt;/li&gt;
&lt;li&gt;Downstream systems assume reliability&lt;/li&gt;
&lt;li&gt;Errors must be detected early&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Trade-off
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Lower flexibility&lt;/li&gt;
&lt;li&gt;Slower ingestion&lt;/li&gt;
&lt;li&gt;Schema changes require coordination&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1.2 Schema-on-Read (Store First, Decide Later)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Definition
&lt;/h4&gt;

&lt;p&gt;Schema-on-read stores data first and applies structure only when the data is read.&lt;/p&gt;

&lt;p&gt;Think of schema-on-read like entering a public library or a large campus.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is no strict identity check at the gate&lt;/li&gt;
&lt;li&gt;Anyone can enter—students, visitors, researchers&lt;/li&gt;
&lt;li&gt;The system does not ask who you are or what you will do upfront&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rules are applied only when you access something specific:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Certain rooms may require permission&lt;/li&gt;
&lt;li&gt;Certain books may have usage rules&lt;/li&gt;
&lt;li&gt;Some resources may only be available to specific people&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this analogy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The campus or library represents the storage system&lt;/li&gt;
&lt;li&gt;Entering freely represents storing raw data without validation&lt;/li&gt;
&lt;li&gt;Rules applied later represent schema being enforced at read time&lt;/li&gt;
&lt;li&gt;Meaning is decided when you access the data, not when it arrives&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Story Example
&lt;/h4&gt;

&lt;p&gt;A mobile app analytics system collects user events:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different app versions send different fields&lt;/li&gt;
&lt;li&gt;Some events are incomplete&lt;/li&gt;
&lt;li&gt;New fields appear frequently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rejecting such data would cause data loss. Instead, everything is stored, and meaning is applied during analysis.&lt;/p&gt;
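&lt;p&gt;The store-first idea can be sketched in Python: events are ingested verbatim, and a schema is projected only at read time. The event fields here are hypothetical.&lt;/p&gt;

```python
import json

# Schema-on-read sketch: store every event verbatim, decide structure at read time.
raw_store = []   # the "library": anything may enter, no gate check

def ingest(raw_event: str) -> None:
    raw_store.append(raw_event)               # no validation on the way in

def read_clicks() -> list:
    """Apply a schema only now: project the fields THIS query cares about."""
    rows = []
    for raw in raw_store:
        event = json.loads(raw)
        rows.append({
            "user_id": event.get("user_id"),            # may be absent in old app versions
            "screen":  event.get("screen", "unknown"),  # default for incomplete events
        })
    return rows

# Different app versions send different shapes; all are accepted.
ingest('{"user_id": 1, "screen": "home", "app_version": "2.0"}')
ingest('{"user_id": 2}')                      # incomplete, still stored
print(read_clicks())
```

&lt;p&gt;Nothing was lost at ingestion; the cost is that every reader must decide how to handle missing or inconsistent fields.&lt;/p&gt;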

&lt;h4&gt;
  
  
  Why This Pattern Exists
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;High data variability&lt;/li&gt;
&lt;li&gt;Rapid change&lt;/li&gt;
&lt;li&gt;Exploration and discovery use cases&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Trade-off
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Data quality issues surface later&lt;/li&gt;
&lt;li&gt;Queries must handle inconsistencies&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  2. COPY vs MERGE — What Problem Do They Solve?
&lt;/h2&gt;

&lt;p&gt;Schemas define structure. COPY and MERGE define how new data interacts with existing data.&lt;/p&gt;

&lt;p&gt;The key question here is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Does incoming data represent new facts or changes to existing facts?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  2.1 COPY (Append-Only Thinking)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Definition
&lt;/h4&gt;

&lt;p&gt;COPY means inserting incoming data as new rows without checking for existing records.&lt;/p&gt;

&lt;h4&gt;
  
  
  Intuition
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“Oh, new data?&lt;br&gt;
Cool. Send it in.&lt;/p&gt;

&lt;p&gt;Do I already have something similar?&lt;br&gt;
I don’t care.&lt;/p&gt;

&lt;p&gt;Is this a duplicate?&lt;br&gt;
Not my problem.&lt;/p&gt;

&lt;p&gt;I’ll just add whatever comes in as a new data point and move on.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Story Example
&lt;/h4&gt;

&lt;p&gt;A daily sales report:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yesterday’s sales never change&lt;/li&gt;
&lt;li&gt;Today’s sales are new facts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each day’s data is simply appended.&lt;/p&gt;
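&lt;p&gt;The append-only behavior can be sketched with SQLite standing in for the warehouse ( real engines expose this as COPY or COPY INTO, with engine-specific syntax ):&lt;/p&gt;

```python
import sqlite3

# COPY semantics sketch: every load is a blind append, no key lookup, no dedup.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE daily_sales (sale_date TEXT, amount REAL)")

def copy_load(rows):
    # Incoming rows become new rows, full stop.
    con.executemany("INSERT INTO daily_sales VALUES (?, ?)", rows)

copy_load([("2024-01-01", 120.0), ("2024-01-02", 95.5)])
copy_load([("2024-01-02", 95.5)])   # re-running a load duplicates rows: COPY does not care

count, = con.execute("SELECT COUNT(*) FROM daily_sales").fetchone()
print(count)   # 3 rows: fast and simple, but duplicates are the caller's problem
```

&lt;p&gt;Re-running the same load duplicates rows; that is the price of simplicity, and why COPY pairs naturally with immutable, append-only facts.&lt;/p&gt;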

&lt;h4&gt;
  
  
  Why This Pattern Exists
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Simplicity&lt;/li&gt;
&lt;li&gt;High performance&lt;/li&gt;
&lt;li&gt;Historical accuracy&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Trade-off
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Corrections require reprocessing&lt;/li&gt;
&lt;li&gt;Duplicates are possible&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2.2 MERGE (State-Aware Thinking)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Definition
&lt;/h4&gt;

&lt;p&gt;MERGE updates existing records when matches are found and inserts new records when they are not.&lt;/p&gt;

&lt;h4&gt;
  
  
  Intuition
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;“Alright, new data is coming in.&lt;/p&gt;

&lt;p&gt;David, if you find something we already have, update it with the latest details.&lt;/p&gt;

&lt;p&gt;Carlos, if you don’t recognize the record at all, add it as a brand-new entry.&lt;/p&gt;

&lt;p&gt;Bottom line — keep the data up to date.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Story Example
&lt;/h4&gt;

&lt;p&gt;A customer profile system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Email remains the same&lt;/li&gt;
&lt;li&gt;Address and phone number change&lt;/li&gt;
&lt;li&gt;Status evolves over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system must reflect the current state, not a trail of outdated versions.&lt;/p&gt;
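&lt;p&gt;MERGE semantics can be sketched with SQLite's UPSERT standing in for ANSI MERGE ( the exact syntax varies by engine ): update on match, insert otherwise.&lt;/p&gt;

```python
import sqlite3

# MERGE semantics sketch: email is the match key; matched rows are updated,
# unmatched rows are inserted, so the table always reflects current state.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE customers (
    email   TEXT PRIMARY KEY,   -- the match key: email stays the same
    phone   TEXT,
    status  TEXT)""")

def merge(rows):
    con.executemany("""
        INSERT INTO customers (email, phone, status) VALUES (?, ?, ?)
        ON CONFLICT(email) DO UPDATE SET
            phone  = excluded.phone,       -- matched: refresh mutable fields
            status = excluded.status
    """, rows)

merge([("a@x.com", "111", "trial")])
merge([("a@x.com", "222", "paid"),          # existing customer: updated in place
       ("b@x.com", "333", "trial")])        # unseen customer: inserted

print(con.execute("SELECT email, phone, status FROM customers ORDER BY email").fetchall())
```

&lt;p&gt;Replaying the same batch leaves the table unchanged, which is exactly the idempotency this pattern exists to provide.&lt;/p&gt;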

&lt;h4&gt;
  
  
  Why This Pattern Exists
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Data represents entities, not events&lt;/li&gt;
&lt;li&gt;Corrections are expected&lt;/li&gt;
&lt;li&gt;Idempotency matters&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Trade-off
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;More complex logic&lt;/li&gt;
&lt;li&gt;Higher compute cost&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  3. All Valid Combinations — Schema × Write Pattern
&lt;/h2&gt;

&lt;h3&gt;
  
  
  3.1 Schema-on-Write + COPY
&lt;/h3&gt;

&lt;p&gt;Strict structure, immutable history&lt;/p&gt;

&lt;p&gt;Story:&lt;br&gt;
A financial ledger where every transaction must be valid and never changes.&lt;/p&gt;

&lt;p&gt;Why it fits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data correctness is enforced&lt;/li&gt;
&lt;li&gt;History is preserved forever&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3.2 Schema-on-Write + MERGE
&lt;/h3&gt;

&lt;p&gt;Strict structure, evolving state&lt;/p&gt;

&lt;p&gt;Story:&lt;br&gt;
A customer master table with well-defined fields, where customer details are updated over time.&lt;/p&gt;

&lt;p&gt;Why it fits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strong data guarantees&lt;/li&gt;
&lt;li&gt;Accurate current view&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3.3 Schema-on-Read + COPY
&lt;/h3&gt;

&lt;p&gt;Flexible structure, raw history&lt;/p&gt;

&lt;p&gt;Story:&lt;br&gt;
An application log store capturing all events, even malformed ones.&lt;/p&gt;

&lt;p&gt;Why it fits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero data loss&lt;/li&gt;
&lt;li&gt;Future reprocessing is possible&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3.4 Schema-on-Read + MERGE
&lt;/h3&gt;

&lt;p&gt;Flexible structure, improving state&lt;/p&gt;

&lt;p&gt;Story:&lt;br&gt;
A data enrichment system where records arrive incomplete and get enriched later.&lt;/p&gt;

&lt;p&gt;Why it fits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows partial data&lt;/li&gt;
&lt;li&gt;Data quality improves over time&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  4. Where Partition Immutability Enters the Picture
&lt;/h2&gt;

&lt;p&gt;Modern open table formats rely on immutable data files (partitions).&lt;/p&gt;

&lt;p&gt;Immutability means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once written, data files are never modified&lt;/li&gt;
&lt;li&gt;Updates and deletes create new files&lt;/li&gt;
&lt;li&gt;Old files are logically retired via metadata ( often with a property such as status = DELETED or a valid_to timestamp )&lt;/li&gt;
&lt;/ul&gt;
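&lt;p&gt;The write-once discipline can be sketched in plain Python. The structures below are hypothetical and only loosely model how open table formats track files:&lt;/p&gt;

```python
# Partition-immutability sketch: data files are write-once; an "update" writes a
# new file and retires the old one in metadata, never touching the old bytes.
files = {}        # immutable data files: path -> tuple of rows (never mutated)
metadata = []     # control plane: which files are live in the current version

def write_file(path, rows):
    files[path] = tuple(rows)                         # frozen once written
    metadata.append({"path": path, "status": "LIVE"})

def rewrite_file(old_path, rows, new_path):
    write_file(new_path, rows)                        # write the replacement first
    for entry in metadata:
        if entry["path"] == old_path:
            entry["status"] = "DELETED"               # logical retirement only

def current_rows():
    live = [e["path"] for e in metadata if e["status"] == "LIVE"]
    return [row for path in live for row in files[path]]

write_file("part-0001", [("order-1", 10)])
rewrite_file("part-0001", [("order-1", 12)], "part-0002")   # an update under the hood

print(current_rows())          # readers see the new version...
print("part-0001" in files)    # ...while the old file survives for rollback
```

&lt;p&gt;Because the old file is retired only in metadata, a rollback is just flipping statuses back, and concurrent readers of the previous version are never disturbed.&lt;/p&gt;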

&lt;p&gt;This design is essential for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Safe concurrent reads and writes&lt;/li&gt;
&lt;li&gt;Reliable versioning&lt;/li&gt;
&lt;li&gt;Efficient rollback and recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Immutability in Open Table Formats
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Apache Iceberg (Primary Focus)
&lt;/h3&gt;

&lt;p&gt;Design philosophy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strong immutability&lt;/li&gt;
&lt;li&gt;Snapshot-based metadata&lt;/li&gt;
&lt;li&gt;Schema evolution as a first-class feature&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How patterns map:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;COPY → new data files added to a snapshot&lt;/li&gt;
&lt;li&gt;MERGE → rewritten files + new snapshot&lt;/li&gt;
&lt;li&gt;Schema-on-read and schema-on-write both supported via metadata evolution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Iceberg treats metadata as the control plane and data files as immutable assets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Delta Lake (Secondary)
&lt;/h3&gt;

&lt;p&gt;Design philosophy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transaction log driven&lt;/li&gt;
&lt;li&gt;Immutable parquet files&lt;/li&gt;
&lt;li&gt;Strong support for MERGE&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How patterns map:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;COPY → append entries in the transaction log&lt;/li&gt;
&lt;li&gt;MERGE → file rewrites tracked in the log&lt;/li&gt;
&lt;li&gt;Schema enforcement optional but commonly used&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Delta emphasizes transactional reliability with strong ACID guarantees.&lt;/p&gt;

&lt;h3&gt;
  
  
  Apache Hudi (Secondary)
&lt;/h3&gt;

&lt;p&gt;Design philosophy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Designed for mutable datasets&lt;/li&gt;
&lt;li&gt;Supports record-level updates&lt;/li&gt;
&lt;li&gt;Multiple storage strategies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How patterns map:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;COPY → insert-only tables&lt;/li&gt;
&lt;li&gt;MERGE → copy-on-write or merge-on-read&lt;/li&gt;
&lt;li&gt;Schema evolution supported but more operationally involved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hudi optimizes for near-real-time updates and streaming workloads.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. Closing Note
&lt;/h2&gt;

&lt;p&gt;Schema enforcement, write patterns, and immutability are not independent ideas.&lt;br&gt;
They are three dimensions of the same design space:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schema answers when data is trusted&lt;/li&gt;
&lt;li&gt;COPY vs MERGE answers how data evolves&lt;/li&gt;
&lt;li&gt;Immutability answers how systems stay correct at scale&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once these are understood together, modern data systems stop feeling complex and start feeling inevitable.&lt;/p&gt;

&lt;p&gt;This understanding travels with you—across tools, platforms, and technologies.&lt;/p&gt;

</description>
      <category>dataengineering</category>
    </item>
    <item>
      <title>Snowflake + Postgres: A Small Feature That Signals a Big Shift</title>
      <dc:creator>Shrinivas Vishnupurikar</dc:creator>
      <pubDate>Fri, 26 Dec 2025 06:34:17 +0000</pubDate>
      <link>https://forem.com/shrinivasv73/snowflake-postgres-a-small-feature-that-signals-a-big-shift-43f6</link>
      <guid>https://forem.com/shrinivasv73/snowflake-postgres-a-small-feature-that-signals-a-big-shift-43f6</guid>
      <description>&lt;h2&gt;
  
  
  The Story Every Data Engineer Wonders About
&lt;/h2&gt;

&lt;p&gt;When people talk about data engineering, the explanation is usually simple.&lt;/p&gt;

&lt;p&gt;A data engineer moves data from one system to another.&lt;/p&gt;

&lt;p&gt;In most real-world setups, this means moving data from an &lt;strong&gt;OLTP system&lt;/strong&gt;, where transactions are written continuously, into an &lt;strong&gt;OLAP system&lt;/strong&gt;, which is optimized for analytics, reporting, and business insights. This explanation is usually enough for anyone new to the field.&lt;/p&gt;

&lt;p&gt;But over time, many data engineers begin to feel that something about this setup is slightly off.&lt;/p&gt;

&lt;p&gt;A significant amount of effort goes into building, maintaining, monitoring, and debugging pipelines whose only purpose is to move data from one place to another. Not to transform it in a meaningful way. Not to enrich it. Simply to keep two worlds in sync.&lt;/p&gt;

&lt;p&gt;Eventually, a quiet question starts to surface:&lt;/p&gt;

&lt;p&gt;Why do transactional data and analytical data still need to live in completely separate systems?&lt;/p&gt;




&lt;h2&gt;
  
  
  Acronyms Used in This Blog
&lt;/h2&gt;

&lt;p&gt;To keep things clear, here are the acronyms used throughout this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OLTP (Online Transaction Processing):&lt;/strong&gt; Systems optimized for fast inserts, updates, and deletes
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OLAP (Online Analytical Processing):&lt;/strong&gt; Systems optimized for large-scale reads and analytics
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CDC (Change Data Capture):&lt;/strong&gt; Techniques used to track and replicate data changes
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI (Artificial Intelligence):&lt;/strong&gt; Systems that learn from data to make predictions or decisions
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Elephant in the Room
&lt;/h2&gt;

&lt;p&gt;This question becomes even more fitting in the context of Postgres.&lt;/p&gt;

&lt;p&gt;Postgres literally uses an elephant as its logo, and for years, the separation between Postgres and analytics platforms has been the elephant in the room of modern data architectures.&lt;/p&gt;

&lt;p&gt;We all understand the technical reasons behind the split.&lt;/p&gt;

&lt;p&gt;OLTP systems are designed for correctness, concurrency, and fast writes.&lt;br&gt;&lt;br&gt;
OLAP systems are designed for large scans, aggregations, and analytical workloads.&lt;/p&gt;

&lt;p&gt;Still, the friction remains.&lt;/p&gt;

&lt;p&gt;Snowflake Postgres may appear to be a small announcement, but it quietly acknowledges this long-standing tension instead of ignoring it.&lt;/p&gt;

&lt;p&gt;Before answering what this changes, it helps to be clear about what Snowflake Postgres actually is, and just as importantly, what it is not.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Exactly Is Snowflake Postgres?
&lt;/h2&gt;

&lt;p&gt;Snowflake Postgres is a fully managed &lt;strong&gt;PostgreSQL&lt;/strong&gt; service provisioned directly from a Snowflake account.&lt;/p&gt;

&lt;p&gt;In practical terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is a fully compatible PostgreSQL database
&lt;/li&gt;
&lt;li&gt;Existing Postgres clients and drivers work without change
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snowflake manages scaling, availability, security, and governance&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From a developer’s point of view, everything feels familiar.&lt;br&gt;&lt;br&gt;
From an operational point of view, a large amount of infrastructure responsibility quietly disappears.&lt;/p&gt;

&lt;p&gt;There is no need to separately manage high availability, failover strategies, security patching, or governance layers. Snowflake takes ownership of these concerns in the same way it already does for analytical workloads.&lt;/p&gt;

&lt;p&gt;This offering builds on Snowflake’s 2025 acquisition of &lt;strong&gt;Crunchy Data&lt;/strong&gt;, a company known for running PostgreSQL reliably at enterprise scale.&lt;/p&gt;

&lt;p&gt;Coverage of the acquisition:&lt;br&gt;
&lt;a href="https://www.ctol.digital/news/snowflake-acquires-crunchy-data-250m-ai-database-capabilities/" rel="noopener noreferrer"&gt;Snowflake acquires Crunchy Data: $250M for AI database capabilities&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That acquisition now feels less like an isolated move and more like a foundational step toward something larger.&lt;/p&gt;

&lt;p&gt;Snowflake is not replacing Postgres.&lt;br&gt;&lt;br&gt;
It is not turning itself into a traditional OLTP platform.&lt;/p&gt;

&lt;p&gt;Instead, it is bringing Postgres into the same control plane that already supports analytics and AI workloads. That distinction is subtle, but important.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Was Needed
&lt;/h2&gt;

&lt;p&gt;For many years, most data architectures followed a sensible and widely accepted separation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL (or similar systems) for application and transactional workloads
&lt;/li&gt;
&lt;li&gt;Snowflake for analytics, reporting, and business intelligence
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each system did what it was best at, and for a long time, this model worked well.&lt;/p&gt;

&lt;p&gt;However, as data volumes grew and expectations shifted toward fresher insights, real-time decision-making, and AI-driven use cases, the cost of this separation became harder to ignore.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Day-to-Day Reality
&lt;/h3&gt;

&lt;p&gt;In practice, this separation often meant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuous replication of data from Postgres into Snowflake
&lt;/li&gt;
&lt;li&gt;Lag between production events and analytical visibility
&lt;/li&gt;
&lt;li&gt;Multiple security and governance models to configure and audit
&lt;/li&gt;
&lt;li&gt;More systems to monitor, scale, and troubleshoot
&lt;/li&gt;
&lt;li&gt;Extra logic written solely to keep data reasonably fresh
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this was poor engineering.&lt;/p&gt;

&lt;p&gt;It was simply the best architecture available when transactional and analytical systems lived on different platforms and were owned by different operational models.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Snowflake Postgres Changes
&lt;/h2&gt;

&lt;p&gt;Snowflake Postgres shortens the distance between two critical points in a data system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;where data is created
&lt;/li&gt;
&lt;li&gt;where data is analyzed or used by AI
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In traditional setups, these points are separated by layers of CDC tools, pipelines, schedulers, and orchestration logic. Each layer introduces latency, operational risk, and cognitive overhead.&lt;/p&gt;
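&lt;p&gt;One of those layers, incremental replication, often boils down to watermark-based polling like the sketch below ( table and column names are hypothetical; production CDC tools typically read the write-ahead log instead ):&lt;/p&gt;

```python
import sqlite3

# Watermark-based incremental sync sketch: the kind of glue code the traditional
# OLTP -> OLAP split requires teams to build, monitor, and debug.
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, updated_at TEXT)")
olap_rows = []            # stand-in for the warehouse side
watermark = ""            # last replicated updated_at value

def sync_once():
    global watermark
    changed = oltp.execute(
        "SELECT id, total, updated_at FROM orders WHERE updated_at > ? ORDER BY updated_at",
        (watermark,)).fetchall()
    olap_rows.extend(changed)                  # ship only the delta
    if changed:
        watermark = changed[-1][2]             # advance the watermark
    return len(changed)

oltp.execute("INSERT INTO orders VALUES (1, 10.0, '2024-01-01T10:00')")
print(sync_once())   # first poll ships 1 row
print(sync_once())   # nothing new, yet the pipeline still has to run and be watched
```

&lt;p&gt;Every such loop adds lag and failure modes ( a missed tick, a stuck watermark ), which is precisely the operational weight a shared platform removes.&lt;/p&gt;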

&lt;p&gt;Snowflake Postgres reduces that distance.&lt;/p&gt;

&lt;p&gt;This shift is not primarily about raw performance.&lt;br&gt;&lt;br&gt;
It is about simplifying the system as a whole.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practical Impact
&lt;/h3&gt;

&lt;p&gt;At a practical level, this leads to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced reliance on constant replication of transactional data
&lt;/li&gt;
&lt;li&gt;Fewer pipelines, which directly means fewer failure points
&lt;/li&gt;
&lt;li&gt;A single governance and security model across workloads
&lt;/li&gt;
&lt;li&gt;Easier access to fresher operational data for analytics and AI
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When combined with open table formats such as Apache Iceberg and AI capabilities like Snowflake Cortex, this approach starts to resemble a unified data foundation rather than a collection of loosely connected systems.&lt;/p&gt;

&lt;p&gt;Snowflake’s &lt;strong&gt;PG_LAKE&lt;/strong&gt; initiative reinforces this direction by exploring deeper integration between Postgres and the lakehouse model:&lt;br&gt;
&lt;a href="https://www.snowflake.com/en/engineering-blog/pg-lake-postgres-lakehouse-integration/" rel="noopener noreferrer"&gt;PG Lake Postgres Lakehouse Integration&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters for Data Engineers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cleaner Pipelines and Lower Operational Overhead
&lt;/h3&gt;

&lt;p&gt;Traditional architectures rely on CDC tools, ETL or ELT pipelines, and orchestrators to keep Postgres and Snowflake in sync. Each layer is necessary, but together they add operational weight.&lt;/p&gt;

&lt;p&gt;With Snowflake Postgres, operational data and analytical workloads share the same platform. This reduces the number of moving parts and allows data engineers to spend more time on modeling, optimization, and business use cases rather than pipeline maintenance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fewer Sync Issues and More Trust in Data
&lt;/h3&gt;

&lt;p&gt;Separated systems commonly introduce late-arriving data, schema drift, and partial updates after failures.&lt;/p&gt;

&lt;p&gt;With Postgres integrated inside Snowflake, governance and security remain consistent, duplication is reduced, and analytical outputs more accurately reflect operational reality. This directly improves trust in dashboards, reports, and downstream decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Faster Analytics and AI Experimentation
&lt;/h3&gt;

&lt;p&gt;Modern use cases such as personalization, fraud detection, and near real-time analytics depend on fresh operational data.&lt;/p&gt;

&lt;p&gt;Snowflake Postgres narrows the gap between production data and analytical access, making it easier to experiment and iterate without redesigning data movement pipelines for every new idea.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unified Security and Governance
&lt;/h3&gt;

&lt;p&gt;Managing access controls, auditing, and compliance across platforms is expensive and error-prone.&lt;/p&gt;

&lt;p&gt;With Snowflake Postgres:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication is centralized
&lt;/li&gt;
&lt;li&gt;Permission models are unified
&lt;/li&gt;
&lt;li&gt;Auditing and lineage follow a consistent approach
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This simplifies compliance with standards such as SOC 2 and GDPR while reducing operational burden.&lt;/p&gt;




&lt;h2&gt;
  
  
  Real-World Context: Why the Split Existed
&lt;/h2&gt;

&lt;p&gt;Many modern companies have followed the Postgres-for-OLTP and Snowflake-for-OLAP pattern for good reasons. That split enabled scale, protected production systems, and unlocked advanced analytics.&lt;/p&gt;

&lt;p&gt;Snowflake Postgres does not invalidate these architectures.&lt;/p&gt;

&lt;p&gt;Instead, it offers an alternative when tighter integration between operational data, analytics, and AI becomes valuable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Trade-Offs (Because There Are Always Trade-Offs)
&lt;/h2&gt;

&lt;p&gt;Snowflake Postgres is not a universal solution.&lt;/p&gt;

&lt;p&gt;It does not mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;every OLTP workload should move into Snowflake
&lt;/li&gt;
&lt;li&gt;traditional Postgres deployments will disappear
&lt;/li&gt;
&lt;li&gt;existing architectures need immediate rewrites
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For applications with strict latency requirements or deeply embedded infrastructure, standalone Postgres will continue to make sense.&lt;/p&gt;

&lt;p&gt;What Snowflake Postgres offers is choice.&lt;/p&gt;

&lt;p&gt;For data-heavy products where operational and analytical workloads are tightly linked, teams can now design systems with fewer boundaries and fewer moving parts.&lt;/p&gt;

&lt;p&gt;Good architecture is not about trends.&lt;br&gt;&lt;br&gt;
It is about choosing the setup that introduces the least friction.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who This Is a No-Brainer For
&lt;/h2&gt;

&lt;p&gt;Snowflake Postgres is especially compelling for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New products already planning to use Postgres for OLTP workloads
&lt;/li&gt;
&lt;li&gt;Teams building data-driven or AI-heavy applications from day one
&lt;/li&gt;
&lt;li&gt;Organizations that want strong governance without stitching together multiple platforms
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For greenfield systems, starting with Postgres inside Snowflake can remove entire classes of future complexity.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Sensible Migration Path for Existing Systems
&lt;/h2&gt;

&lt;p&gt;For teams considering this architecture, the safest approach is gradual and low-risk:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start with &lt;strong&gt;development environments&lt;/strong&gt; to validate tooling and behavior
&lt;/li&gt;
&lt;li&gt;Extend to &lt;strong&gt;QA or staging&lt;/strong&gt; for performance and governance testing
&lt;/li&gt;
&lt;li&gt;Gradually introduce &lt;strong&gt;production workloads&lt;/strong&gt;, starting with non-critical services
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This path allows teams to evaluate real-world benefits without forcing large, disruptive migrations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Snowflake Postgres may look like a small checkbox feature today.&lt;/p&gt;

&lt;p&gt;But in hindsight, it could mark the point where analytics platforms stopped being passive data stores and started becoming active system backbones.&lt;/p&gt;

&lt;p&gt;Not by replacing Postgres, but by finally removing the wall between transactions, analytics, and AI.&lt;/p&gt;

&lt;p&gt;And that is a shift worth paying attention to.&lt;/p&gt;

</description>
      <category>snowflake</category>
      <category>postgres</category>
      <category>database</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>What is a Build Process in React ( or in any framework for that matter? )</title>
      <dc:creator>Shrinivas Vishnupurikar</dc:creator>
      <pubDate>Thu, 08 Aug 2024 11:51:51 +0000</pubDate>
      <link>https://forem.com/shrinivasv73/what-is-a-build-process-in-react-or-in-any-framework-for-that-matter--4df5</link>
      <guid>https://forem.com/shrinivasv73/what-is-a-build-process-in-react-or-in-any-framework-for-that-matter--4df5</guid>
      <description>&lt;h2&gt;
  
  
  [ Technology ]: ReactJS – Article #1
&lt;/h2&gt;




&lt;blockquote&gt;
&lt;p&gt;Frameworks simplify development for engineers, and this is my attempt to simplify the behind-the-scenes functioning of ReactJS.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Story time
&lt;/h2&gt;

&lt;p&gt;I've started out with ReactJS. Yup, I really have. It's like a dream delayed by 2 years, from when I was passionate about UI / UX design and front-end development before I dived into Data Science. ( I still am as passionate as I was 2 years ago. )&lt;/p&gt;

&lt;p&gt;I'm now an intern at a company ( it calls itself a startup because its culture is more of a start-up's than a company's ), and today, on my first day, I literally had nothing to do since my TL ( Team Lead ) was not coming to the office, occupied with meetings.&lt;/p&gt;

&lt;p&gt;Did I let the time slip through my hands? Absolutely not.&lt;br&gt;
The probability of getting a task or project to put my data analytics skills to the test was fairly low, so I resorted to getting my hands dirty with development. I sensed this might be the best time to get started with ReactJS.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is React?
&lt;/h2&gt;

&lt;p&gt;React is a verb ( pun intended ). But in the context of development technologies, &lt;strong&gt;"The library for web and native user interfaces"&lt;/strong&gt;, claims the &lt;a href="https://react.dev/" rel="noopener noreferrer"&gt;official website&lt;/a&gt; of ReactJS.&lt;/p&gt;

&lt;p&gt;Now if you've been around the development ecosystem, you must have heard of ReactJS's two competitors, or rather siblings: &lt;a href="https://angular.io/" rel="noopener noreferrer"&gt;Angular&lt;/a&gt; and &lt;a href="https://vuejs.org/" rel="noopener noreferrer"&gt;VueJS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here's a short comparison of three of the most popular front-end technologies.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Aspect&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;React&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Angular&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Vue&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Core Concept&lt;/td&gt;
&lt;td&gt;Library focused on UI&lt;/td&gt;
&lt;td&gt;Full-fledged framework&lt;/td&gt;
&lt;td&gt;Progressive framework&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Binding&lt;/td&gt;
&lt;td&gt;One-way data flow&lt;/td&gt;
&lt;td&gt;Two-way data binding&lt;/td&gt;
&lt;td&gt;Two-way data binding (optional)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Component Structure&lt;/td&gt;
&lt;td&gt;Custom components&lt;/td&gt;
&lt;td&gt;Directives and components&lt;/td&gt;
&lt;td&gt;Components&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning Curve&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;td&gt;Steep&lt;/td&gt;
&lt;td&gt;Gentle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;High (Virtual DOM)&lt;/td&gt;
&lt;td&gt;Can be slower due to two-way data binding&lt;/td&gt;
&lt;td&gt;High (Optimized rendering)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scalability&lt;/td&gt;
&lt;td&gt;Excellent, suitable for large-scale apps&lt;/td&gt;
&lt;td&gt;Strong support for large-scale enterprise apps&lt;/td&gt;
&lt;td&gt;Good scalability, but might require additional libraries for complex projects&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community and Ecosystem&lt;/td&gt;
&lt;td&gt;Largest community, rich ecosystem&lt;/td&gt;
&lt;td&gt;Large community, strong ecosystem&lt;/td&gt;
&lt;td&gt;Growing community, good ecosystem&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Flexibility&lt;/td&gt;
&lt;td&gt;High, can be used with other libraries/frameworks&lt;/td&gt;
&lt;td&gt;Less flexible due to rigid structure&lt;/td&gt;
&lt;td&gt;Flexible, can be used incrementally&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Why should we use ReactJS when we have plain HTML and JS?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Demerits of Plain HTML and JS.
&lt;/h3&gt;

&lt;p&gt;Following are the problems you're going to face if you use plain HTML and JS:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Difficulty Maintaining Large Applications:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Plain HTML and JS lack a structured approach to organizing code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Complex applications can lead to a tangled mess of logic and UI manipulation within event listeners and script files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This makes it challenging to understand, modify, and debug code as the application grows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Inefficient DOM Manipulation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Directly manipulating the DOM in JS can be inefficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every state change might trigger a complete re-rendering of the HTML structure, even for minor UI updates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This can lead to performance bottlenecks as the application complexity increases.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Limited Reusability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Building reusable UI components with plain HTML and JS can be cumbersome.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You might end up copying and pasting code snippets across different parts of your application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This makes it difficult to maintain consistency and implement changes efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Complex State Management:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Managing the state of an application (data that controls UI behavior) becomes difficult with plain HTML and JS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keeping track of data changes and their corresponding UI updates can become messy and error-prone, especially for complex data flows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
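&lt;p&gt;To make the last point concrete, here's a minimal sketch ( my own illustration, not from any library ) of the hand-rolled bookkeeping that plain JS state management tends to grow into: every piece of state needs its own subscription plumbing to keep the UI in sync.&lt;/p&gt;

```javascript
// A minimal hand-rolled "store" in plain JavaScript. Every state change must
// manually notify every UI-update routine; forget one and the UI goes stale.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      // Re-run every listener by hand on each change.
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      listeners.push(fn);
    },
  };
}

const store = createStore({ count: 0 });
const seen = [];
store.subscribe((s) => seen.push(s.count)); // stand-in for a DOM update
store.setState({ count: 1 });
store.setState({ count: 2 });
console.log(seen); // [1, 2]
```

&lt;p&gt;In React, this subscribe-and-notify plumbing is exactly what state updates handle for you.&lt;/p&gt;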




&lt;h2&gt;
  
  
  How ReactJS comes to the Rescue.
&lt;/h2&gt;

&lt;p&gt;ReactJS addresses these limitations by offering a component-based architecture, virtual DOM for efficient updates, and a rich ecosystem for managing complex UIs and application state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improved Maintainability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React's component-based architecture and declarative approach lead to cleaner and more maintainable codebases, especially for large-scale applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Performance:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The virtual DOM and efficient rendering mechanisms in React contribute to smoother and faster user experiences, even for complex web applications.&lt;/li&gt;
&lt;/ul&gt;
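&lt;p&gt;As a toy illustration of the virtual DOM idea ( my own sketch, not React's actual diffing algorithm ), consider comparing two lightweight descriptions of the UI and computing only what changed, instead of re-rendering everything:&lt;/p&gt;

```javascript
// Compare two plain-object "virtual nodes" and return only the changed keys,
// i.e. the minimal set of patches to apply to the real DOM.
function diff(oldNode, newNode) {
  const patches = [];
  for (const key of new Set([...Object.keys(oldNode), ...Object.keys(newNode)])) {
    if (oldNode[key] !== newNode[key]) {
      patches.push({ key, value: newNode[key] });
    }
  }
  return patches;
}

const before = { tag: 'button', label: 'Like', count: 0 };
const after  = { tag: 'button', label: 'Like', count: 1 };
console.log(diff(before, after)); // [ { key: 'count', value: 1 } ]
```

&lt;p&gt;Only the &lt;code&gt;count&lt;/code&gt; patch would be applied to the real DOM; the rest of the node is left untouched.&lt;/p&gt;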

&lt;p&gt;&lt;strong&gt;Code Reusability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React's component model promotes code reusability, allowing you to build modular UI components that can be easily shared and combined across your application.&lt;/li&gt;
&lt;/ul&gt;
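&lt;p&gt;The component model can be sketched in plain JavaScript as "a function from props to markup" ( a simplification; real React components return JSX, not strings ):&lt;/p&gt;

```javascript
// A reusable "component": one function, many usages with different props,
// instead of copy-pasting the same markup around the application.
function Button({ label }) {
  return `<button class="btn">${label}</button>`;
}

const toolbar = [Button({ label: 'Save' }), Button({ label: 'Cancel' })].join('');
console.log(toolbar);
// <button class="btn">Save</button><button class="btn">Cancel</button>
```

&lt;p&gt;Changing the markup in one place now updates every usage, which is the consistency benefit the bullet points above describe.&lt;/p&gt;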




&lt;h2&gt;
  
  
  The Difference makes the Difference.
&lt;/h2&gt;

&lt;p&gt;When I created my first components, I asked myself, "What exactly is this?" Is it HTML or JS?&lt;/p&gt;

&lt;p&gt;I've embedded JS into HTML via the &lt;code&gt;&amp;lt;script&amp;gt; &amp;lt;/script&amp;gt;&lt;/code&gt; element or the &lt;code&gt;&amp;lt;script src="index.js"&amp;gt; &amp;lt;/script&amp;gt;&lt;/code&gt; element. But writing HTML inside a JS file feels weird, or rather, different.&lt;/p&gt;

&lt;p&gt;I tried writing HTML inside the JS file of a non-React project and, guess what, it didn't go well.&lt;/p&gt;

&lt;p&gt;Then I learned that this special syntax ( HTML-like code inside of a JS file ) is called JSX ( JavaScript XML ) and is &lt;strong&gt;an extension of JavaScript.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the code that browsers ultimately understand is plain HTML and JS, it means that some operations are performed on the JSX ( syntactic sugar for building complex applications with ease ) that we write.&lt;/p&gt;
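&lt;p&gt;One of those operations can be sketched as follows: a compiler like Babel turns each JSX tag into a plain function call. The &lt;code&gt;createElement&lt;/code&gt; below is a toy stand-in for React's real one, just to show the shape of the transformation:&lt;/p&gt;

```javascript
// Toy stand-in for React.createElement: it just records an element description
// as a plain object (React's real version returns a similar "element" object).
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// JSX source:            <h1 className="title">Hello</h1>
// compiles to roughly:   createElement('h1', { className: 'title' }, 'Hello')
const el = createElement('h1', { className: 'title' }, 'Hello');
console.log(el);
// { type: 'h1', props: { className: 'title' }, children: [ 'Hello' ] }
```

&lt;p&gt;So JSX is never shipped to the browser as-is; what runs is ordinary JavaScript function calls producing ordinary objects.&lt;/p&gt;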

&lt;p&gt;This behind-the-scenes operation is called the Build Process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The high-level idea of a build process is to transform your development code into an optimized version ready for deployment in a production environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While specific tools and configurations may vary depending on the technology stack, the general concepts and goals of the build process apply universally across frontend web development.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is a Build Process in ReactJS?
&lt;/h2&gt;

&lt;p&gt;We've learnt that the high-level idea remains the same, but the Build Process of React has several phases, which are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Bundling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Imagine your React application consists of numerous JavaScript files, CSS stylesheets, and potentially image assets.&lt;/li&gt;
&lt;li&gt;A bundler like Webpack takes all these separate files and combines them into a smaller number of optimized bundles.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Transpilation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transpilation involves converting this modern code (JSX) into plain JavaScript (ES5 or a compatible version) that can run on a wider range of browsers.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Minification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minification shrinks the file size of your bundled code by removing unnecessary characters like whitespace, comments, and long variable/function names.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Optimization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The build process might involve additional optimizations like tree-shaking, which removes unused code from your final bundles.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Production Mode:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The development mode offers features like source maps (for easier debugging) and detailed error messages.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In contrast, the production mode focuses on optimization by enabling minification, tree-shaking, and other performance enhancements.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;code&gt;react-scripts&lt;/code&gt;: The Wolf of React Project
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;react-scripts&lt;/code&gt; is an internal package used by Create React App (CRA) to handle the behind-the-scenes functionalities in a React project.&lt;/p&gt;

&lt;p&gt;It's not directly interacted with by developers most of the time, but it's essential for development efficiency.&lt;/p&gt;

&lt;p&gt;Here's what react-scripts is responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bundling and Transpilation&lt;/li&gt;
&lt;li&gt;Development Server and Hot Reloading&lt;/li&gt;
&lt;li&gt;Testing&lt;/li&gt;
&lt;li&gt;Building for Production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The three most significant tasks that react-scripts performs are as follows; we'll understand each in more detail:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Bundling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Imagine your React application consists of numerous JavaScript files, CSS stylesheets, and potentially image assets.&lt;/li&gt;
&lt;li&gt;A bundler like Webpack takes all these separate files and combines them into a smaller number of optimized bundles.&lt;/li&gt;
&lt;li&gt;This reduces the number of HTTP requests a browser needs to make, improving website loading speed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Transpilation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modern JavaScript features like JSX syntax used in React might not be understood by older browsers.&lt;/li&gt;
&lt;li&gt;Transpilation involves converting this modern code into plain JavaScript (ES5 or a compatible version) that can run on a wider range of browsers.&lt;/li&gt;
&lt;li&gt;Tools like Babel are commonly used for transpilation in React.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Minification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minification, also known as minimization, is a technique applied to code to reduce its file size without affecting its functionality.&lt;/li&gt;
&lt;li&gt;This is particularly beneficial for React applications deployed to production, as smaller file sizes translate to faster loading times for web pages.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's how minification works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Removing Unnecessary Characters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minifiers eliminate whitespace characters like spaces, newlines, and tabs from the code. This might seem insignificant for small files, but in large React projects, it can lead to a noticeable reduction in size.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Shortening Variable and Function Names:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minifiers often replace descriptive variable and function names with shorter, single-letter names. While this makes the code less readable for humans, it significantly reduces file size.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Removing Comments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Comments are essential for documenting and understanding code during development. However, in production, they're not required for the code to function. Minifiers typically remove comments to further minimize file size.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
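&lt;p&gt;Steps 1 and 3 above ( whitespace and comment removal ) can be sketched with a toy minifier. Real tools like Terser do far more, including safe variable renaming, so this is only an illustration:&lt;/p&gt;

```javascript
// Toy minifier: strip comments, then collapse runs of whitespace.
function miniMinify(source) {
  return source
    .replace(/\/\/[^\n]*/g, '')       // drop single-line comments
    .replace(/\/\*[\s\S]*?\*\//g, '') // drop block comments
    .replace(/\s+/g, ' ')             // collapse whitespace runs to one space
    .trim();
}

const src = `
  // add two numbers
  function add(a, b) {
    return a + b; /* sum */
  }
`;
console.log(miniMinify(src)); // function add(a, b) { return a + b; }
```

&lt;p&gt;Even on this tiny snippet the output is noticeably shorter; across a whole bundle, the savings add up.&lt;/p&gt;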




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is the learning of just one day, summarized.&lt;/p&gt;

&lt;p&gt;I would have been able to build much more than this if I had utilized GenAI tools for building. I did use GenAI tools either way, but for learning purposes.&lt;/p&gt;

&lt;p&gt;I believe asking the right set of questions and then understanding the concepts in true depth will set you apart from the ones who automate the development.&lt;/p&gt;

&lt;p&gt;In interviews, it is the understanding and clarity of concepts that is sought, rather than your coding speed, because either way coding is going to be automated to an extent.&lt;/p&gt;

&lt;p&gt;Thus the X-factor of being a great software engineer lies in your ability to at least validate and verify whether the output of a GenAI model caters to your tech needs or not.&lt;/p&gt;

&lt;p&gt;If you think that my content is valuable or have any feedback,&lt;br&gt;
do let me know by reaching out to my social media handles, which you'll discover in my profile and in what follows:&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/shrinivasv73/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/shrinivasv73/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Twitter (X): &lt;a href="https://twitter.com/shrinivasv73" rel="noopener noreferrer"&gt;https://twitter.com/shrinivasv73&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instagram: &lt;a href="https://www.instagram.com/shrinivasv73/" rel="noopener noreferrer"&gt;https://www.instagram.com/shrinivasv73/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Email: &lt;a href="mailto:shrinivasv73@gmail.com"&gt;shrinivasv73@gmail.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🫡 This is Shrinivas, Signing Off!&lt;/p&gt;




</description>
      <category>javascript</category>
      <category>react</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to deploy ReactJS Apps (built using Vite) on GitHub Pages?</title>
      <dc:creator>Shrinivas Vishnupurikar</dc:creator>
      <pubDate>Thu, 08 Aug 2024 10:01:09 +0000</pubDate>
      <link>https://forem.com/shrinivasv73/how-to-deploy-reactjs-apps-built-using-vite-on-github-pages-38bi</link>
      <guid>https://forem.com/shrinivasv73/how-to-deploy-reactjs-apps-built-using-vite-on-github-pages-38bi</guid>
      <description>&lt;h2&gt;
  
  
  [ Technology ]: ReactJS – Article #2
&lt;/h2&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Differences in build tools lead to hurdles, especially in deployment. Today we'll spotlight one such and solve it for you.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You gather requirements, you design, you develop and you test. Great! Now you'll breeze through deployment.&lt;/p&gt;

&lt;p&gt;Just kidding. I know, deploying dynamic, feature-rich, robust ( and 50 other adjectives ) apps that use backends and databases is quite tricky. Hence, as a starter, we're going to learn how to deploy a simple ( too simple ) ReactJS app.&lt;/p&gt;

&lt;p&gt;Where are we deploying it? GitHub Pages! Yes, GitHub is not scoped only to hosting a project's source code, nor GitHub Pages only to hosting static websites that are purely HTML5 + CSS3 + JS.&lt;/p&gt;

&lt;p&gt;Where else could you deploy your front-end apps in general?&lt;br&gt;
The list of platforms is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.netlify.com/" rel="noopener noreferrer"&gt;Netlify&lt;/a&gt; (Best for beginners to get started)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://vercel.com/docs/" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt; (Best for Advanced projects)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://surge.sh/" rel="noopener noreferrer"&gt;Surge&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Creating a ReactJS App! The Confusion Spiral.
&lt;/h2&gt;

&lt;p&gt;If you're living in 2024, I hope you don't use &lt;code&gt;npx create-react-app &amp;lt;app-name&amp;gt;&lt;/code&gt; to create ReactJS App.&lt;/p&gt;

&lt;p&gt;If you look at ReactJS's official docs, you'll see that there's no option to create a pure ReactJS app; instead, they'll insist you create a project using Next.js, Remix, Gatsby and more.&lt;/p&gt;

&lt;p&gt;Why the sudden shift away from npx create-react-app?&lt;br&gt;
Although it was a fantastic starting point for many React developers, the landscape of frontend development has evolved significantly due to its limitations with respect to the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Opinionated Structure&lt;/li&gt;
&lt;li&gt;Difficulty in Customization&lt;/li&gt;
&lt;li&gt;Performance Overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thus developers resorted to other, better alternatives, which were as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Vite: This build tool has gained immense popularity due to its lightning-fast development server, efficient bundling, and flexibility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next.js: While primarily a React framework, it offers a robust set of features for building web applications, including server-side rendering, static site generation, and routing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gatsby: Another popular static site generator built on React, offering performance and SEO benefits.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I now get that NextJS apps are, in the end, ReactJS apps, because NextJS is a framework built on top of the ReactJS library.&lt;/p&gt;

&lt;p&gt;But either way, if you wanted to create not a framework app ( a NextJS app ) but a library app ( a ReactJS app ), which is something I did, you could use the Vite build tool to do so.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a ReactJS App using Vite
&lt;/h3&gt;

&lt;p&gt;You can use the following command in the terminal to create a React app that uses JavaScript ( by default ) in one go.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you didn't know, React officially supports TypeScript.&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm create vite@latest deploy-reactjs-app-with-vite -- --template react
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or you could use the following command to create a ReactJS app via a step-by-step process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm create vite@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Setting the GitHub Up!
&lt;/h2&gt;

&lt;p&gt;Let's create a remote GitHub repository. Go to your GitHub profile, create the repository, and you should be able to see the following interface for an empty repo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1mpugzaf94u0tyktrix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1mpugzaf94u0tyktrix.png" alt="Initial Git Repo's Impression" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you go to the Settings =&amp;gt; Pages tab of your GitHub repo, you'll see the following interface:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv48n41dkocc1ua6cx5t9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv48n41dkocc1ua6cx5t9.png" alt="Viewing Branches for our Repo" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It shows either the main branch or None. Right now, you don't have to be concerned about it, but be aware that we're going to revisit this page again.&lt;/p&gt;

&lt;p&gt;Once you work on the project, I want you to execute the following commands (sequentially, of course) in your working directory:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initialize an empty git repo on your local system.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Track all files by staging them.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Create a checkpoint that takes a snapshot of the current progress of the staged file(s) from above:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; git commit -m "Added Project Files"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Connect the local repo (on our system) with the remote repo (the one created on GitHub).
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; git remote add origin url_of_the_remote_git_repo_ending_with_.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Upload the progress made till our checkpoint from local repo to the remote repo.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; git push -u origin main

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you successfully push the changes to the remote repository, you'll see the following output in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhe7m7jo9kv9m4aqoj48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhe7m7jo9kv9m4aqoj48.png" alt="Successful Acknowledgement for Pushing Changes" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if you refresh the GitHub Repository the interface would look something as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvobiqhin03zmdwwj3uko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvobiqhin03zmdwwj3uko.png" alt="Upload Files visible on GitHub" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Installing Dependencies and Crying over Errors:
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Installing &lt;code&gt;gh-pages&lt;/code&gt; package
&lt;/h3&gt;

&lt;p&gt;Right now we have just made 1 checkpoint and pushed it to our remote repo. We have NOT started to work on the deployment! YET!&lt;/p&gt;

&lt;p&gt;Head to your working directory in your integrated terminal and fire up the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install gh-pages --save-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will install and save gh-pages (github-pages) as a dev dependency of our project.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dependencies are packages required for the application to run in production. Dev dependencies are packages needed only for development and testing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once completed, the terminal will look as follows (provides some positive acknowledgement):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkohfezh2oilf4s8tiz6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkohfezh2oilf4s8tiz6.png" alt="Successful Installation of gh-pages dependency" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Update package.json
&lt;/h3&gt;

&lt;p&gt;You have to add the following code to your package.json file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "homepage": "https://&amp;lt;username&amp;gt;.github.io/&amp;lt;repository&amp;gt;",
    "scripts": {
        "predeploy": "npm run build",
        "deploy": "gh-pages -d build",
        // ... other scripts
    }
    // ... other fields
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above scripts are not specified during the project installation via Vite, as they are &lt;code&gt;gh-pages&lt;/code&gt;-specific scripts.&lt;/p&gt;

&lt;p&gt;The explanation for each is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;homepage&lt;/code&gt;: The &lt;code&gt;"homepage"&lt;/code&gt; field in the &lt;code&gt;package.json&lt;/code&gt; file of a React project specifies the URL at which your app will be hosted. This field is particularly important when deploying a React application to GitHub Pages or any other static site hosting service that serves the site from a subdirectory.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Do update the values of &lt;code&gt;&amp;lt;username&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;repository&amp;gt;&lt;/code&gt; of the homepage property. In my case the values are &lt;code&gt;ShrinivasV73&lt;/code&gt; and &lt;code&gt;Deploy-ReactJS-App-With-Vite&lt;/code&gt; respectively.&lt;/p&gt;

&lt;p&gt;It is good to keep in mind that the values are case-sensitive and should be entered accordingly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;"scripts"&lt;/code&gt; field in &lt;code&gt;package.json&lt;/code&gt; allows you to define custom scripts that can be run using npm run .&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;"predeploy": "npm run build"&lt;/code&gt;: This script runs automatically before the deploy script and it triggers the build process of your project.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"deploy": "gh-pages -d build"&lt;/code&gt;: This script is used to deploy your built application to GitHub Pages. It uses the &lt;code&gt;gh-pages&lt;/code&gt; package to publish the contents of the specified directory (build in this case) to the &lt;code&gt;gh-pages&lt;/code&gt; branch of your GitHub repository.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3: Deploy The App
&lt;/h3&gt;

&lt;p&gt;Now that we've updated the scripts as well, it is time to deploy the project. Head to the terminal and fire up the following command to proceed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ooryvkmk87iddnu0qxn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ooryvkmk87iddnu0qxn.png" alt="Encountering Error During Deployment" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And boom 💥, WE GET AN ERROR!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: ENOENT: no such file or directory, stat '&amp;lt;working-drectory&amp;gt;/build'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;npm (node package manager) is trying to access a folder named &lt;code&gt;build&lt;/code&gt; via the command &lt;code&gt;npm run deploy&lt;/code&gt;, which actually executes &lt;code&gt;gh-pages -d build&lt;/code&gt; behind the scenes.&lt;/p&gt;

&lt;p&gt;But it can't find any.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bug Fix: #1 Updating the &lt;code&gt;package.json&lt;/code&gt; file
&lt;/h3&gt;

&lt;p&gt;Remember, we didn't create a &lt;code&gt;build&lt;/code&gt; directory ourselves anywhere in this journey.&lt;/p&gt;

&lt;p&gt;Vite outputs the build files to a &lt;code&gt;dist&lt;/code&gt; directory by default and not a &lt;code&gt;build&lt;/code&gt; directory, whereas tools like CRA (create-react-app) use a &lt;code&gt;build&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;( This is exactly where the underlying behavior of different build tools manifests. )&lt;/p&gt;

&lt;p&gt;We simply have to replace the &lt;code&gt;build&lt;/code&gt; with &lt;code&gt;dist&lt;/code&gt; in the &lt;code&gt;deploy&lt;/code&gt; script inside the &lt;code&gt;package.json&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ 
    "homepage": "https://&amp;lt;username&amp;gt;.github.io/&amp;lt;repository&amp;gt;", 
    "scripts": { 
        "predeploy": "npm run build", 
        "deploy": "gh-pages -d dist", 
        // ... other scripts } 
    // other scripts 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bug Fix #2: Updating the vite.config.js
&lt;/h3&gt;

&lt;p&gt;By default your &lt;code&gt;vite.config.js&lt;/code&gt; file looks as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
    plugins: [react()],
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make our app function as expected without any bugs after successful deployment, we need to add &lt;code&gt;base: '/Deploy-ReactJS-App-With-Vite/'&lt;/code&gt; to the object, which is passed to the &lt;code&gt;defineConfig()&lt;/code&gt; method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
    plugins: [react()],
    base: '/Deploy-ReactJS-App-With-Vite/',
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting the &lt;code&gt;base&lt;/code&gt; property in &lt;code&gt;vite.config.js&lt;/code&gt; is crucial for deploying a Vite-built app to a subdirectory, such as on GitHub Pages. It ensures that all asset paths are correctly prefixed with the subdirectory path, preventing broken links and missing resources.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Our repository is named &lt;code&gt;Deploy-ReactJS-App-With-Vite&lt;/code&gt; and you are deploying it to GitHub Pages. The URL for your site will be &lt;code&gt;https://username.github.io/Deploy-ReactJS-App-With-Vite/&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you don’t set the base property, the browser will try to load assets from &lt;code&gt;https://username.github.io/&lt;/code&gt; instead of &lt;code&gt;https://username.github.io/Deploy-ReactJS-App-With-Vite/&lt;/code&gt;, resulting in missing resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
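&lt;p&gt;A quick way to see this ( a sketch using the standard &lt;code&gt;URL&lt;/code&gt; API, with a placeholder username ) is to resolve the asset paths Vite would emit with and without the &lt;code&gt;base&lt;/code&gt; prefix:&lt;/p&gt;

```javascript
// The page being served from the GitHub Pages subdirectory:
const page = 'https://username.github.io/Deploy-ReactJS-App-With-Vite/';

// Without base, root-absolute asset paths resolve against the site root:
console.log(new URL('/assets/index.js', page).href);
// https://username.github.io/assets/index.js  (404 on GitHub Pages)

// With base: '/Deploy-ReactJS-App-With-Vite/', the paths carry the prefix:
console.log(new URL('/Deploy-ReactJS-App-With-Vite/assets/index.js', page).href);
// https://username.github.io/Deploy-ReactJS-App-With-Vite/assets/index.js
```

&lt;p&gt;The first URL misses the repository subdirectory entirely, which is exactly the broken-asset symptom described above.&lt;/p&gt;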




&lt;h2&gt;
  
  
  Retry Deploying The App
&lt;/h2&gt;

&lt;p&gt;Once you make the necessary changes to the &lt;code&gt;package.json&lt;/code&gt; file and the &lt;code&gt;vite.config.js&lt;/code&gt; file, it's time to git add, commit, push. AGAIN!&lt;/p&gt;

&lt;p&gt;Head to the working directory in the terminal and try deploying the app again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this time, when the app actually gets deployed to GitHub Pages, you'll again see a positive acknowledgement in the terminal itself, as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre2nsxs1c4fi2r7t2faf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre2nsxs1c4fi2r7t2faf.png" alt="Positive Acknowledgement of Successful Deployment" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you refresh your GitHub repo and go to its Settings =&amp;gt; Pages tab, you'll see a new gh-pages branch added to the list of branches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq65jsv4pnbgd93oyhmdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq65jsv4pnbgd93oyhmdd.png" alt="Changes Reflected about Branches on GitHub Repo" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How do you access the app on the web? Remember the value of the &lt;code&gt;homepage&lt;/code&gt; property in the &lt;code&gt;package.json&lt;/code&gt; file? Yup! That's it.&lt;/p&gt;

&lt;p&gt;In our case, it is &lt;a href="https://ShrinivasV73.github.io/Deploy-ReactJS-App-With-Vite/" rel="noopener noreferrer"&gt;https://ShrinivasV73.github.io/Deploy-ReactJS-App-With-Vite/&lt;/a&gt;&lt;/p&gt;
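&lt;p&gt;As a reminder, the relevant part of &lt;code&gt;package.json&lt;/code&gt; looks roughly like this (the exact scripts depend on what you set up earlier; &lt;code&gt;gh-pages -d dist&lt;/code&gt; is the usual choice since Vite builds into &lt;code&gt;dist&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "homepage": "https://ShrinivasV73.github.io/Deploy-ReactJS-App-With-Vite/",
  "scripts": {
    "predeploy": "npm run build",
    "deploy": "gh-pages -d dist"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;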




&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Congratulations! You've successfully learned how to deploy a ReactJS app built with Vite to GitHub Pages.&lt;/p&gt;

&lt;p&gt;My recommendation for you folks would be to experiment in different ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create different front-end / full-stack apps with Angular or Vue.js and see how the configurations need to be updated for them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create React apps with frameworks like Next.js, Remix, or Gatsby.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use different platforms, like Vercel or Netlify, to deploy your front-end applications and see which option suits each use case best.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a nutshell, I started with experimentation and summarised my learnings in this article. Maybe it's your turn to do the same. Cheers!&lt;/p&gt;

&lt;p&gt;If you find my content valuable or have any feedback,&lt;br&gt;
let me know by reaching out through the social media handles below (you'll also find them in my profile):&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/shrinivasv73/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/shrinivasv73/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Twitter (X): &lt;a href="https://twitter.com/shrinivasv73" rel="noopener noreferrer"&gt;https://twitter.com/shrinivasv73&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instagram: &lt;a href="https://www.instagram.com/shrinivasv73/" rel="noopener noreferrer"&gt;https://www.instagram.com/shrinivasv73/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Email: &lt;a href="mailto:shrinivasv73@gmail.com"&gt;shrinivasv73@gmail.com&lt;/a&gt;&lt;/p&gt;




</description>
      <category>react</category>
      <category>vite</category>
      <category>github</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
