<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Arunkumar Amaran</title>
    <description>The latest articles on Forem by Arunkumar Amaran (@amaranarun).</description>
    <link>https://forem.com/amaranarun</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3914878%2F5e2a74cc-3101-4325-b96f-679ac084df2f.png</url>
      <title>Forem: Arunkumar Amaran</title>
      <link>https://forem.com/amaranarun</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/amaranarun"/>
    <language>en</language>
    <item>
      <title>How to Optimize BigQuery Costs (Real Techniques That Work)</title>
      <dc:creator>Arunkumar Amaran</dc:creator>
      <pubDate>Wed, 06 May 2026 00:40:36 +0000</pubDate>
      <link>https://forem.com/amaranarun/how-to-optimize-bigquery-costs-real-techniques-that-work-1c0h</link>
      <guid>https://forem.com/amaranarun/how-to-optimize-bigquery-costs-real-techniques-that-work-1c0h</guid>
      <description>&lt;p&gt;I want to start with a number: &lt;strong&gt;$140,000&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That's how much a single misconfigured scheduled query cost one of the teams I worked with over a three-month period. Not a rogue machine learning job. Not a massive ETL pipeline. A scheduled query. Running every 15 minutes. Scanning a 4TB table every time. For 90 days before anyone looked at the bill closely enough to notice.&lt;/p&gt;

&lt;p&gt;Nobody wrote that query with malicious intent. The engineer who wrote it was experienced. The table had been small when the query was first created. Then the table grew. The query didn't know that. The bill did.&lt;/p&gt;

&lt;p&gt;I've spent the better part of the last decade building and optimizing data platforms on GCP — including BigQuery at enterprise scale at Macy's Systems &amp;amp; Technology. This post is what I wish someone had handed me early on. Not the official Google documentation version of cost optimization. The actual production version, including the mistakes that end up in the bill.&lt;/p&gt;




&lt;h2&gt;
  
  
  First: Understand What You're Actually Paying For
&lt;/h2&gt;

&lt;p&gt;BigQuery has two pricing models and a lot of people mix them up or don't think about which one they're on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On-demand pricing&lt;/strong&gt; charges you per byte scanned. As of 2026, that's around $6.25 per TB in most regions. Every query you run, BigQuery measures how much data it touched and charges accordingly. Write a bad query that scans 10TB when it needed to scan 10GB and you've just paid 1,000x more than necessary.&lt;/p&gt;
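&lt;p&gt;To make the per-byte math concrete, here's a back-of-the-envelope cost model in Python. The $6.25/TiB rate is the figure quoted above; substitute your region's current price.&lt;/p&gt;

```python
# Back-of-the-envelope on-demand cost model. The rate below is the
# figure quoted in this post; check your region's current pricing.
PRICE_PER_TIB_USD = 6.25

def scan_cost_usd(bytes_scanned):
    """On-demand cost of a query that scans bytes_scanned bytes."""
    return bytes_scanned / (1024 ** 4) * PRICE_PER_TIB_USD

bad_query = scan_cost_usd(10 * 1024 ** 4)   # scans 10 TiB
good_query = scan_cost_usd(10 * 1024 ** 3)  # scans 10 GiB
print(f"10 TiB scan: ${bad_query:.2f}")
print(f"10 GiB scan: ${good_query:.4f}")    # roughly 1,000x cheaper
```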

&lt;p&gt;&lt;strong&gt;Capacity pricing (flat-rate / editions)&lt;/strong&gt; charges you for reserved compute slots (dedicated query-processing capacity) regardless of how much data you scan. If your workloads are predictable and heavy, this can be dramatically cheaper. If they're bursty and light, on-demand is usually better.&lt;/p&gt;

&lt;p&gt;Most teams start on on-demand and never revisit that decision. That's often a mistake. But before you switch, you need to understand your query patterns — and that's what the rest of this post is largely about.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Single Biggest Lever: Stop Scanning Data You Don't Need
&lt;/h2&gt;

&lt;p&gt;On-demand BigQuery charges by bytes scanned, not bytes returned. That means the columns you don't &lt;code&gt;SELECT&lt;/code&gt; and the rows you don't filter still cost money if BigQuery has to read past them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Never use &lt;code&gt;SELECT *&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;I know you know this. I also know that half the scheduled queries in your project have &lt;code&gt;SELECT *&lt;/code&gt; in them right now. Go check.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- This scans every column in a 2TB table&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.orders`&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;order_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-15'&lt;/span&gt;

&lt;span class="c1"&gt;-- This scans only what you need&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;customer_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;total_amount&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.orders`&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;order_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-15'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second query might cost 10-20x less depending on how wide the table is. For a table with 60 columns where you need 4, the math is brutal.&lt;/p&gt;
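&lt;p&gt;A rough way to see those numbers, assuming (optimistically) that every column is the same width. Real savings depend on the byte width of each column, so treat this as a sketch, not a quote:&lt;/p&gt;

```python
# Rough column-pruning estimate. Assumes all columns are equally wide,
# which is rarely true (STRING and nested columns dominate), so this is
# an illustration only.
ON_DEMAND_PER_TIB_USD = 6.25

def pruned_fraction(columns_needed, total_columns):
    return columns_needed / total_columns

full_scan_usd = 2 * ON_DEMAND_PER_TIB_USD              # 2 TiB table
pruned_scan_usd = full_scan_usd * pruned_fraction(4, 60)
print(f"SELECT * : ${full_scan_usd:.2f} per run")
print(f"4 of 60 columns: ${pruned_scan_usd:.2f} per run (about 15x less)")
```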

&lt;h3&gt;
  
  
  Use partition pruning — and verify it's working
&lt;/h3&gt;

&lt;p&gt;Partitioned tables are one of BigQuery's most powerful cost controls, but they only work if your query filter actually triggers partition elimination.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- ✅ This WILL prune partitions (uses the partition column directly)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;total_amount&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.orders`&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;order_date&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-01'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-31'&lt;/span&gt;

&lt;span class="c1"&gt;-- ❌ This will NOT prune partitions (wraps the column in a function)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;total_amount&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.orders`&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_timestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-01'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-31'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second query looks like it should be equivalent. It isn't — wrapping the partition column in a function forces a full table scan. This is one of the most common and expensive mistakes I see in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to verify:&lt;/strong&gt; Run the query and look at the execution details. If you see "Partitions scanned: 1 of 365" — great, it's working. If you see "Partitions scanned: 365 of 365" on a query filtering a single day — your filter isn't hitting the partition column the way you think it is.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster your tables
&lt;/h3&gt;

&lt;p&gt;Partitioning handles time-based filtering. Clustering handles everything else. If you're frequently filtering by &lt;code&gt;customer_id&lt;/code&gt;, &lt;code&gt;product_id&lt;/code&gt;, &lt;code&gt;store_id&lt;/code&gt;, or &lt;code&gt;region&lt;/code&gt; — those columns should be in your clustering key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.orders`&lt;/span&gt;
&lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_timestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;CLUSTER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;customer_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;store_id&lt;/span&gt;
&lt;span class="k"&gt;OPTIONS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;require_partition_filter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;TRUE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.orders_raw`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;require_partition_filter = TRUE&lt;/code&gt; option is worth calling out explicitly. It makes BigQuery reject any query that doesn't include a partition filter. Annoying to developers at first. Saves you from the $140,000 mistake I mentioned at the top.&lt;/p&gt;




&lt;h2&gt;
  
  
  Materialized Views: The Most Underused Feature in BigQuery
&lt;/h2&gt;

&lt;p&gt;Most people know materialized views exist. Few people use them properly for cost optimization.&lt;/p&gt;

&lt;p&gt;Here's the thing about BigQuery materialized views that isn't obvious: &lt;strong&gt;they can answer queries even when the query doesn't directly reference them.&lt;/strong&gt; BigQuery's query optimizer will automatically rewrite a query against a base table to use a materialized view instead, if the view can satisfy the query and is more efficient.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Create the materialized view&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;MATERIALIZED&lt;/span&gt; &lt;span class="k"&gt;VIEW&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.daily_sales_mv`&lt;/span&gt;
&lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;sale_date&lt;/span&gt;
&lt;span class="k"&gt;AS&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_timestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;     &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;sale_date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;store_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;product_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;             &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;total_units&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;total_amount&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;         &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;total_revenue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;DISTINCT&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;order_count&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.orders`&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now here's where it gets interesting. A dashboard query that does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;store_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;total_amount&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;revenue&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.orders`&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_timestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-15'&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;store_id&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...might get automatically rewritten to query the materialized view instead of the full orders table — even though the query never mentioned the materialized view. That's BigQuery doing the cost optimization for you, provided you've set things up right.&lt;/p&gt;

&lt;p&gt;The maintenance overhead is minimal. The cost reduction on heavily queried aggregations can be 80-95%.&lt;/p&gt;
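&lt;p&gt;The savings claim is easy to sanity-check. With assumed, purely illustrative sizes for the base-table scan and the materialized view:&lt;/p&gt;

```python
# Illustrative numbers (assumed): the dashboard query touches ~400 GiB of
# the raw orders table, but only ~40 GiB of the daily aggregate MV.
ON_DEMAND_PER_TIB_USD = 6.25

base_gib = 400.0  # assumed bytes scanned against the base table
mv_gib = 40.0     # assumed bytes scanned against the materialized view

reduction = 1 - mv_gib / base_gib
per_query_savings_usd = (base_gib - mv_gib) / 1024 * ON_DEMAND_PER_TIB_USD
print(f"Scan reduction: {reduction:.0%}")
print(f"Savings per query: ${per_query_savings_usd:.2f}")
```

&lt;p&gt;At that kind of reduction, a dashboard that fires the query a few hundred times a day pays for the view's refresh overhead many times over.&lt;/p&gt;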




&lt;h2&gt;
  
  
  Query Cost Estimation Before You Run
&lt;/h2&gt;

&lt;p&gt;This one sounds obvious but is skipped constantly: estimate cost before running unfamiliar queries, especially on large tables.&lt;/p&gt;

&lt;p&gt;In BigQuery console, every query shows a bytes-to-be-processed estimate in the top right before execution. Pay attention to it.&lt;/p&gt;

&lt;p&gt;In code, you can do a dry run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.cloud&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bigquery&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bigquery&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;job_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bigquery&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;QueryJobConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dry_run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;use_query_cache&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    SELECT *
    FROM `project.dataset.large_table`
    WHERE region = &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;US&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="n"&gt;job&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;job_config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;job_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Bytes processed if this query ran
&lt;/span&gt;&lt;span class="n"&gt;bytes_processed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;job&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;total_bytes_processed&lt;/span&gt;
&lt;span class="n"&gt;gb_processed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bytes_processed&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;estimated_cost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bytes_processed&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;6.25&lt;/span&gt;  &lt;span class="c1"&gt;# $6.25 per TB
&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;This query would scan: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;gb_processed&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; GB&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Estimated cost: $&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;estimated_cost&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build this into your data pipeline development workflow. Make developers run a dry-run estimate before merging new queries into production. It takes 30 seconds. It occasionally saves you $10,000.&lt;/p&gt;
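&lt;p&gt;A minimal sketch of that guardrail, assuming you feed it &lt;code&gt;job.total_bytes_processed&lt;/code&gt; from a dry run like the one above; the budget threshold is whatever your team agrees on:&lt;/p&gt;

```python
# CI guardrail sketch: reject queries whose dry-run estimate exceeds a
# per-query budget. Feed in total_bytes_processed from a dry run.
PRICE_PER_TIB_USD = 6.25

def check_query_budget(bytes_processed, budget_usd):
    """Return the estimated cost, raising if it exceeds the budget."""
    cost = bytes_processed / (1024 ** 4) * PRICE_PER_TIB_USD
    if cost > budget_usd:
        raise RuntimeError(
            f"estimated cost ${cost:.2f} exceeds budget ${budget_usd:.2f}"
        )
    return cost

# A 1 TiB scan passes a $10 budget; a 100 TiB scan would raise instead.
print(check_query_budget(1024 ** 4, budget_usd=10.0))
```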




&lt;h2&gt;
  
  
  Slots, Reservations, and When Flat-Rate Actually Makes Sense
&lt;/h2&gt;

&lt;p&gt;The on-demand vs. flat-rate decision is one that most teams make once and never revisit. That's a mistake because the right answer changes as your workload matures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On-demand is usually right when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your query volume is unpredictable or spiky&lt;/li&gt;
&lt;li&gt;You're early-stage and still figuring out your workload patterns&lt;/li&gt;
&lt;li&gt;Your monthly BigQuery bill is under ~$2,000&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Flat-rate (Editions) is usually right when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have consistent, heavy query workloads running throughout the day&lt;/li&gt;
&lt;li&gt;You're running large-scale batch processing jobs on a schedule&lt;/li&gt;
&lt;li&gt;Your on-demand bill has stabilized and is consistently above ~$3,000/month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The crossover point depends on your region and edition tier, so don't trust a single magic number. A better rule: pull your sustained slot usage from &lt;code&gt;INFORMATION_SCHEMA&lt;/code&gt;, price it at your edition's slot rate, and compare that to your current on-demand bill. If the reservation comes out consistently cheaper, it's worth modeling out properly.&lt;/p&gt;
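&lt;p&gt;One way to model the comparison. The slot-hour rate below is an assumed illustrative figure, not a quote; substitute your edition and region's actual price:&lt;/p&gt;

```python
# Reservation-vs-on-demand comparison sketch. SLOT_HOUR_USD is an
# assumed illustrative rate; look up your edition's actual price.
ON_DEMAND_PER_TIB_USD = 6.25
SLOT_HOUR_USD = 0.04            # assumed, for illustration only
HOURS_PER_MONTH = 730

def on_demand_monthly_usd(tib_scanned_per_month):
    return tib_scanned_per_month * ON_DEMAND_PER_TIB_USD

def reservation_monthly_usd(baseline_slots):
    # A committed baseline bills around the clock, busy or idle.
    return baseline_slots * HOURS_PER_MONTH * SLOT_HOUR_USD

od = on_demand_monthly_usd(800)     # e.g. 800 TiB scanned per month
rsv = reservation_monthly_usd(100)  # e.g. a 100-slot baseline
print(f"on-demand: ${od:,.0f}/mo vs 100-slot reservation: ${rsv:,.0f}/mo")
```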

&lt;p&gt;&lt;strong&gt;One trap to avoid:&lt;/strong&gt; Buying flat-rate capacity and then not using it. Idle reserved slots cost the same as busy ones. If you're going to commit to flat-rate, you need to actually fill those slots — which means scheduling your heavy batch jobs during off-peak hours to maximize slot utilization.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Check your slot utilization in INFORMATION_SCHEMA&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;TIMESTAMP_TRUNC&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period_start&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;HOUR&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;     &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period_slot_ms&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;    &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;slot_hours_used&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`region-us`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INFORMATION_SCHEMA&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TIMELINE_BY_PROJECT&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;period_start&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;TIMESTAMP_SUB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;CURRENT_TIMESTAMP&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="k"&gt;DAY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run this before committing to any reservation purchase. If you're averaging 40 slot-hours per hour during business hours and near-zero overnight, a flat-rate reservation sized for your peak will be mostly idle.&lt;/p&gt;
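&lt;p&gt;The utilization math for that pattern, sized for a peak of 40 slots:&lt;/p&gt;

```python
# Utilization of a peak-sized reservation given the usage pattern above:
# ~40 slot-hours per hour for 10 business hours, near zero otherwise.
def reservation_utilization(busy_slots, busy_hours_per_day, reserved_slots):
    used_slot_hours = busy_slots * busy_hours_per_day
    available_slot_hours = reserved_slots * 24
    return used_slot_hours / available_slot_hours

util = reservation_utilization(busy_slots=40, busy_hours_per_day=10,
                               reserved_slots=40)
print(f"Reservation utilization: {util:.0%}")  # the rest bills while idle
```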




&lt;h2&gt;
  
  
  INFORMATION_SCHEMA: Your Cost Debugging Best Friend
&lt;/h2&gt;

&lt;p&gt;BigQuery's &lt;code&gt;INFORMATION_SCHEMA&lt;/code&gt; views are where you find the expensive queries. Most people don't use them nearly enough.&lt;/p&gt;

&lt;h3&gt;
  
  
  Find your most expensive queries from the last 30 days
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;job_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;user_email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;ROUND&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;total_bytes_processed&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;POW&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;tb_processed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;ROUND&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;total_bytes_processed&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;POW&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;estimated_cost_usd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;SUBSTR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;                            &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;query_preview&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;creation_time&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`region-us`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INFORMATION_SCHEMA&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JOBS_BY_PROJECT&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="n"&gt;creation_time&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;TIMESTAMP_SUB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;CURRENT_TIMESTAMP&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt; &lt;span class="k"&gt;DAY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;job_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'QUERY'&lt;/span&gt;
  &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="k"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'DONE'&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;total_bytes_processed&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run this query once a week. The top 5 results are almost always where your money is going. In my experience, fixing the top 3 queries on this list typically cuts the bill by 30-50%.&lt;/p&gt;

&lt;h3&gt;
  
  
  Find queries run by service accounts that are scanning too much
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;user_email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;query_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;ROUND&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;total_bytes_processed&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;POW&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;total_tb_processed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;ROUND&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;total_bytes_processed&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;POW&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;total_cost_usd&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`region-us`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INFORMATION_SCHEMA&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JOBS_BY_PROJECT&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="n"&gt;creation_time&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;TIMESTAMP_SUB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;CURRENT_TIMESTAMP&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt; &lt;span class="k"&gt;DAY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;job_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'QUERY'&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;user_email&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;total_tb_processed&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Service accounts running scheduled queries are frequently the biggest offenders because nobody's watching them. A dashboard refresh that runs every 15 minutes against an unpartitioned table is 2,880 queries per month. Do the math on what each of those costs and multiply.&lt;/p&gt;
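&lt;p&gt;Doing that math explicitly, with an assumed 0.5 TiB scanned per refresh (plug in your own query's dry-run estimate):&lt;/p&gt;

```python
# Monthly cost of a 15-minute dashboard refresh. The 0.5 TiB per run is
# an assumed figure; substitute your query's actual dry-run estimate.
PRICE_PER_TIB_USD = 6.25

def scheduled_query_monthly_usd(runs_per_day, tib_per_run, days=30):
    return runs_per_day * days * tib_per_run * PRICE_PER_TIB_USD

runs_per_day = 24 * 60 // 15   # every 15 minutes = 96 runs/day
monthly = scheduled_query_monthly_usd(runs_per_day, tib_per_run=0.5)
print(f"{runs_per_day * 30:,} runs/month, ~${monthly:,.0f}/month")
```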




&lt;h2&gt;
  
  
  Caching: Free Is Good, When It Works
&lt;/h2&gt;

&lt;p&gt;BigQuery caches query results for 24 hours. Identical queries (same SQL, same referenced tables, no changes to underlying data) return cached results at zero cost.&lt;/p&gt;

&lt;p&gt;This sounds great. It has a few gotchas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Caching doesn't work if your query uses non-deterministic functions like &lt;code&gt;CURRENT_TIMESTAMP()&lt;/code&gt;, &lt;code&gt;CURRENT_DATE()&lt;/code&gt;, or &lt;code&gt;RAND()&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Caching doesn't work if you explicitly disable it&lt;/li&gt;
&lt;li&gt;Caching doesn't work across projects by default&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The biggest practical implication: &lt;strong&gt;don't put &lt;code&gt;CURRENT_DATE()&lt;/code&gt; or &lt;code&gt;CURRENT_TIMESTAMP()&lt;/code&gt; directly in scheduled queries if you can help it.&lt;/strong&gt; Pass the date as a parameter instead. An identical parameterized query against a table that hasn't changed since the last run will hit the cache. A query with an inline &lt;code&gt;CURRENT_DATE()&lt;/code&gt; won't.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- ❌ This will never hit cache (current_date() makes it non-deterministic)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;store_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;revenue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.daily_sales`&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;sale_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;CURRENT_DATE&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;store_id&lt;/span&gt;

&lt;span class="c1"&gt;-- ✅ Pass the date as a scripting variable or scheduled query parameter&lt;/span&gt;
&lt;span class="k"&gt;DECLARE&lt;/span&gt; &lt;span class="n"&gt;run_date&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2026-01-15'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;store_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;revenue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`project.dataset.daily_sales`&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;sale_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;run_date&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;store_id&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
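&lt;p&gt;If you're not sure whether caching is actually working for you, &lt;code&gt;INFORMATION_SCHEMA&lt;/code&gt; exposes a per-job &lt;code&gt;cache_hit&lt;/code&gt; flag. A quick sketch — the &lt;code&gt;region-us&lt;/code&gt; qualifier is illustrative, so substitute your own region:&lt;/p&gt;

```sql
-- What fraction of the last 7 days' queries were served from cache?
SELECT
  COUNTIF(cache_hit) / COUNT(*) AS cache_hit_rate
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
```

&lt;p&gt;A surprisingly low hit rate usually traces back to inline non-deterministic functions like the ones above.&lt;/p&gt;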






&lt;h2&gt;The Cost Optimization Mindset Shift&lt;/h2&gt;

&lt;p&gt;Here's the thing that took me the longest to internalize: BigQuery cost optimization is not a one-time project. It's a continuous practice.&lt;/p&gt;

&lt;p&gt;Tables grow. Query patterns change. New analysts join the team and write queries the way they're used to from other systems. A dashboard that was cheap six months ago isn't necessarily cheap today, because the underlying table is now 10x larger.&lt;/p&gt;

&lt;p&gt;The teams that control their BigQuery costs over the long term are the ones that have made it a habit — weekly cost reviews, dry-run estimates as part of code review, INFORMATION_SCHEMA queries on a dashboard that someone actually looks at.&lt;/p&gt;
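&lt;p&gt;The dry-run step in particular is easy to automate. Here's a minimal sketch of a pre-merge cost gate — the budget threshold and function name are illustrative, and the commented-out client call assumes the &lt;code&gt;google-cloud-bigquery&lt;/code&gt; library with configured credentials:&lt;/p&gt;

```python
# Minimal pre-merge cost gate. The budget check is pure Python and runs
# anywhere; the dry run itself (commented below) needs GCP credentials.

def exceeds_scan_budget(bytes_processed: int, max_tib: float = 1.0) -> bool:
    """True if a dry-run estimate says the query would scan more than max_tib."""
    return bytes_processed / 2**40 > max_tib

# In CI, feed this from a dry run -- dry runs are free:
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   job = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True))
#   blocked = exceeds_scan_budget(job.total_bytes_processed)

print(exceeds_scan_budget(4 * 2**40))    # True  -- 4 TiB blows a 1 TiB budget
print(exceeds_scan_budget(100 * 2**20))  # False -- 100 MiB is fine
```

&lt;p&gt;Failing the build on an oversized estimate turns cost review from a monthly surprise into a two-second check.&lt;/p&gt;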

&lt;p&gt;The teams that get surprised by the bill are the ones that set everything up once and assumed it would stay efficient on its own.&lt;/p&gt;

&lt;p&gt;It won't. But with the right habits, it's absolutely manageable — and the savings compound faster than you'd expect.&lt;/p&gt;




&lt;h2&gt;Quick Reference Checklist&lt;/h2&gt;

&lt;p&gt;Before you ship any new BigQuery query or table to production, run through this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] No &lt;code&gt;SELECT *&lt;/code&gt; — only columns you need&lt;/li&gt;
&lt;li&gt;[ ] Table is partitioned on the most common filter column&lt;/li&gt;
&lt;li&gt;[ ] Partition filter is on the raw column, not wrapped in a function&lt;/li&gt;
&lt;li&gt;[ ] Clustering keys match your most common &lt;code&gt;WHERE&lt;/code&gt; and &lt;code&gt;JOIN&lt;/code&gt; conditions&lt;/li&gt;
&lt;li&gt;[ ] &lt;code&gt;require_partition_filter = TRUE&lt;/code&gt; for large tables&lt;/li&gt;
&lt;li&gt;[ ] Dry-run estimate reviewed before merging&lt;/li&gt;
&lt;li&gt;[ ] Materialized view considered for any aggregation queried more than daily&lt;/li&gt;
&lt;li&gt;[ ] No &lt;code&gt;CURRENT_TIMESTAMP()&lt;/code&gt; inline if caching matters&lt;/li&gt;
&lt;li&gt;[ ] Scheduled query frequency matches actual business need (not "every 15 minutes because that was the default")&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Wrapping Up&lt;/h2&gt;

&lt;p&gt;The $140,000 story at the top of this post isn't the worst one I've heard. It's not even close. BigQuery cost disasters tend to be quiet — they accumulate over weeks or months before anyone notices, and by then the money is gone.&lt;/p&gt;

&lt;p&gt;The good news: almost every expensive pattern is avoidable, and most of them are avoidable with techniques that also make your queries faster. Partition pruning, clustering, materialized views, proper column selection — these aren't just cost controls. They're good engineering.&lt;/p&gt;

&lt;p&gt;Start with INFORMATION_SCHEMA. Find your top 5 most expensive queries. Fix them. Repeat monthly. You'll be surprised how quickly the bill responds.&lt;/p&gt;
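&lt;p&gt;As a starting point, here's a sketch of that "top 5" query. The &lt;code&gt;region-us&lt;/code&gt; qualifier is illustrative — use your own region — and note that grouping by raw &lt;code&gt;query&lt;/code&gt; text treats near-duplicate queries as distinct:&lt;/p&gt;

```sql
-- Top 5 most expensive queries over the last 30 days, by bytes billed.
SELECT
  query,
  user_email,
  SUM(total_bytes_billed) / POW(1024, 4) AS total_tib_billed,
  COUNT(*) AS run_count
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
  AND cache_hit IS NOT TRUE
GROUP BY query, user_email
ORDER BY total_tib_billed DESC
LIMIT 5
```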




&lt;p&gt;&lt;em&gt;Have a BigQuery cost story of your own? Drop it in the comments — the good ones are always educational (and occasionally horrifying).&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Arunkumar Amaran&lt;/strong&gt; — Tech Manager, Data Engineering &amp;amp; Architecture at Macy's Systems &amp;amp; Technology. 20+ years building enterprise data platforms on GCP, AWS, and Azure. IEEE Senior Member. Writes about the things that don't make it into the official docs.&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://linkedin.com/in/arunkumar-amaran" rel="noopener noreferrer"&gt;linkedin.com/in/arunkumar-amaran&lt;/a&gt; · 📧 &lt;a href="mailto:arunkumar.amaran@ieee.org"&gt;arunkumar.amaran@ieee.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Opinions are my own and do not represent my employer.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>bigquery</category>
      <category>dataengineering</category>
      <category>googlecloud</category>
      <category>sql</category>
    </item>
  </channel>
</rss>
