<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Charlotte Towell</title>
    <description>The latest articles on Forem by Charlotte Towell (@charlottetowell).</description>
    <link>https://forem.com/charlottetowell</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1502601%2Fa77d872b-f3d5-42ac-b394-1de57b2bdd2b.jpeg</url>
      <title>Forem: Charlotte Towell</title>
      <link>https://forem.com/charlottetowell</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/charlottetowell"/>
    <language>en</language>
    <item>
      <title>Desktop Music Player Using QT for Python - Built with Github Copilot CLI</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Sat, 31 Jan 2026 03:43:45 +0000</pubDate>
      <link>https://forem.com/charlottetowell/desktop-music-player-using-qt-for-python-built-with-github-copilot-cli-45mk</link>
      <guid>https://forem.com/charlottetowell/desktop-music-player-using-qt-for-python-built-with-github-copilot-cli-45mk</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I have been really into media ownership lately. I have started &lt;a href="https://dev.to/charlottetowell/digitising-vinyls-using-audacity-a-raspberry-pi-54o7"&gt;digitising my vinyl collection this year (using a raspberry pi!)&lt;/a&gt; with the goal of abandoning Spotify. Through this, I discovered foobar2000 - a freeware audio player for local music files.&lt;/p&gt;

&lt;p&gt;While it's good, I decided I wanted to customise my own player to have full control over the style and keep the functionality limited to my needs. Yes, I could have made a custom skin, but I've never ventured into desktop development before, and in the current age of AI, fully customised software is more accessible than ever - so why not make my own from scratch!&lt;/p&gt;

&lt;p&gt;For this challenge, I built a customised audio player desktop application - specifically with an audio visualiser feature so that I can play it on a mini screen in my PC set-up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/lY0YiOOjKnM"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;Some of the key features of the application are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;viewing your library by album, artist, year or folder&lt;/li&gt;
&lt;li&gt;integrated media controls using keyboard shortcuts like space bar for play &amp;amp; pause&lt;/li&gt;
&lt;li&gt;queue management including the ability to go back to previously played tracks&lt;/li&gt;
&lt;li&gt;detecting most supported audio files from a set destination folder&lt;/li&gt;
&lt;li&gt;the audio visualiser + mini-player which are my personal favs&lt;/li&gt;
&lt;li&gt;both Windows &amp;amp; Linux support (or at least Raspberry Pi OS, which is what I tested myself)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check out the GitHub repo here:&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/charlottetowell" rel="noopener noreferrer"&gt;
        charlottetowell
      &lt;/a&gt; / &lt;a href="https://github.com/charlottetowell/desktop-music-player" rel="noopener noreferrer"&gt;
        desktop-music-player
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A desktop music player for local audio files - For the Github Copilot CLI Challenge
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Desktop Music Player&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A desktop audio player for Windows &amp;amp; Linux with real-time visualization, built with Python and Qt.&lt;/p&gt;
&lt;p&gt;Built for the &lt;a href="https://dev.to/challenges/github-2026-01-21" rel="nofollow"&gt;Github Copilot CLI Challenge&lt;/a&gt; hosted by dev.to. See my entry blog + &lt;strong&gt;demo&lt;/strong&gt; here: &lt;a href="https://dev.to/charlottetowell/desktop-music-player-using-qt-for-python-built-with-github-copilot-cli-45mk" rel="nofollow"&gt;Desktop Music Player Using QT for Python - Built with Github Copilot CLI&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;In this README:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/charlottetowell/desktop-music-player#features-" rel="noopener noreferrer"&gt;Features 🎶&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/charlottetowell/desktop-music-player#tech-stack-" rel="noopener noreferrer"&gt;Tech Stack 💻&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/charlottetowell/desktop-music-player#running-locally" rel="noopener noreferrer"&gt;Running Locally&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/charlottetowell/desktop-music-player#desktop-installation" rel="noopener noreferrer"&gt;Desktop Installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/charlottetowell/desktop-music-player#project-structure" rel="noopener noreferrer"&gt;Project Structure&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features 🎶&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Platform&lt;/strong&gt;: Windows &amp;amp; Linux support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OS Media Key Integration&lt;/strong&gt;: Control playback with keyboard media keys&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mini player&lt;/strong&gt;: A pop-out mini player window with audio waveform visualisation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audio Playback&lt;/strong&gt;: Full playback engine with controls and playback history&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Queue Management&lt;/strong&gt;: Add, reorder, and remove tracks with drag-and-drop&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Library Scanner&lt;/strong&gt;: Auto-discover audio files with metadata extraction (MP3, FLAC, WAV, OGG, M4A, AAC)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Audio Visualizer&lt;/strong&gt;: Waveform display with smooth animations&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Tech Stack 💻&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;UI&lt;/strong&gt;: PySide6 (Qt for Python)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audio&lt;/strong&gt;…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/charlottetowell/desktop-music-player" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;




&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;tldr;&lt;/strong&gt; lots of planning for context + very autonomous Copilot == fast iteration to usability!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My usual approach to AI-assisted coding is hands-on and short-leash: iterating fast on very small, tightly scoped micro-tasks back and forth with the models.&lt;/p&gt;

&lt;p&gt;For this project, to give Github Copilot CLI a real test, I approached it very differently: I essentially stayed in planning &amp;amp; design mode and let it do all the heavy lifting on the programming itself.&lt;/p&gt;

&lt;p&gt;I have also never built a desktop application before. Most of my work is in web development or backend scripts so it was exciting to see how fast this brand new tooling can come together with the help of AI.&lt;/p&gt;

&lt;p&gt;I approached my workflow with Github Copilot CLI in four stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Planning &amp;amp; Design + Initial Project Setup&lt;/li&gt;
&lt;li&gt;Rapidly Building Out Features &amp;amp; Functionality (Ignoring UI)&lt;/li&gt;
&lt;li&gt;Design Refinement - Layout &amp;amp; Styling Polish&lt;/li&gt;
&lt;li&gt;Finalisation - Build + Documentation&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Day One 🗓️
&lt;/h4&gt;

&lt;p&gt;On day one, I spent my time drafting up the features I wanted to include and working on a rough (to be replaced) design in Figma. I used different AI models such as Gemini to teach myself an intro to desktop application development, which frameworks or libraries to use, and common gotchas to consider such as performance, all to inform my context for the project scope.&lt;/p&gt;

&lt;p&gt;From this, I wrote my initial system prompt, ran it past AI once more, and ended up with my final &lt;a href="https://github.com/charlottetowell/desktop-music-player/blob/f013c7c4c381bb20b5c1a5ea64751ba0a1de8a44/.github/copilot-instructions.md" rel="noopener noreferrer"&gt;&lt;code&gt;copilot-instructions.md&lt;/code&gt;&lt;/a&gt; file.&lt;/p&gt;

&lt;p&gt;This initial work set a good stage as I then prompted Copilot CLI with a simple prompt to use its instructions and spin up the skeleton of our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;As per the repository's copilot instructions, create the initial skeleton of the project including folder structures and a README explaining how to run locally
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2njv30rewhntnxzs7dyb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2njv30rewhntnxzs7dyb.png" alt="Empty repository with github copilot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Day Two 🗓️
&lt;/h4&gt;

&lt;p&gt;Day two was all about building out the core functionality and my feature wish-lists. I ignored design for the moment here and focused purely on functionality. This consisted of giving Copilot more in-depth prompts about each specific function and then letting it roll to build it out in full. &lt;/p&gt;

&lt;p&gt;This was similar to my usual back-and-forth approach with AI, but I found Copilot was competent enough to complete features in full most of the time when given a detailed prompt, often needing only one follow-up to address bugs found in a short test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpha683rvkjm5utlkjaim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpha683rvkjm5utlkjaim.png" alt="Day 2 Functiionality Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Day Three 🗓️
&lt;/h4&gt;

&lt;p&gt;Day three was when I decided I was done with my initial Figma design and wanted to start over entirely - which was fine, considering I'd spent the prior session ignoring design anyway. I did, however, keep a peach element in the logo as a nod to my original idea.&lt;/p&gt;

&lt;p&gt;After creating a mock-up of my new design, I then uploaded screenshots to my repo and instructed Copilot to update the styling to match.&lt;/p&gt;

&lt;p&gt;My finding here is that Copilot is not fantastic at taking one large overall design brief and applying it globally to the application. However, when briefed on smaller components at a time (eg. left panel, middle panel, right panel) and complemented with some written context, it does a passable job of scaffolding out the desired layout to match the design - and certainly better than prompting a model with no image input.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y4m4kote35rwrpswy9n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y4m4kote35rwrpswy9n.png" alt="Screenshot of figma designs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Finally 🏆
&lt;/h4&gt;

&lt;p&gt;At last, once the app looked good enough for me, I used Copilot to help with the final project tasks such as compiling the application for download and creating clear documentation. It even managed to debug some installation issues, including the librosa library's lazy loading, which wasn't compatible with our built application.&lt;/p&gt;

&lt;p&gt;For installation, Copilot generated a &lt;code&gt;build.spec&lt;/code&gt; file and Windows + Linux scripts to build the application using PyInstaller.&lt;/p&gt;
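
For anyone curious what such a spec looks like: a PyInstaller spec file is itself Python, executed by PyInstaller with its build classes in scope. A minimal, hypothetical sketch (the entry script and app name are placeholders, not the repo's actual values):

```python
# Hypothetical minimal build.spec - PyInstaller provides Analysis/PYZ/EXE at build time.
# 'main.py' and the app name below are placeholders, not the project's real values.
a = Analysis(['main.py'], pathex=[], binaries=[], datas=[], hiddenimports=[])
pyz = PYZ(a.pure)
exe = EXE(pyz, a.scripts, a.binaries, a.datas, name='desktop-music-player', console=False)
```

Building is then a single command, pyinstaller build.spec, which the per-platform scripts can wrap.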

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This was a fun little project to make, and not something I would have attempted without AI in a format I'm unfamiliar with. Although there are a million features &amp;amp; improvements I could add, I am quite satisfied with it meeting the use case I wanted, which was ultimately to display a cute waveform on a screen in my PC set-up. I think the power of AI is not just in commercial use-cases, but in enabling personal side projects like this to rapidly get to a point of completion where you can create fully customised software for everyday life.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F341gn3496mqbpyxtq0u5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F341gn3496mqbpyxtq0u5.jpg" alt="Image of my desk with desktop music player on mini screen"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Digitising Vinyls Using Audacity &amp; a Raspberry Pi</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Tue, 27 Jan 2026 10:54:10 +0000</pubDate>
      <link>https://forem.com/charlottetowell/digitising-vinyls-using-audacity-a-raspberry-pi-54o7</link>
      <guid>https://forem.com/charlottetowell/digitising-vinyls-using-audacity-a-raspberry-pi-54o7</guid>
      <description>&lt;p&gt;My latest side project has been digitising my vinyl collection to ultimately abandon Spotify by the end of the year.&lt;/p&gt;

&lt;p&gt;Is this something I could have achieved by just plugging my laptop into the record player? Of course, but that comes with a number of limitations - mainly, a lack of portability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter - my Raspberry Pi
&lt;/h2&gt;

&lt;p&gt;Small, nimble, and perfect to sit unassumingly at the back of a shelf. Having a micro-computer really shines in situations where the &lt;em&gt;micro&lt;/em&gt; aspect comes in handy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Recipe
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;1 USB Enabled Record Player (Mine is the &lt;a href="https://www.audio-technica.com/en-us/at-lp60xusb" rel="noopener noreferrer"&gt;Audio Technica LP60XUSB&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;1 Raspberry Pi with at least a USB port&lt;/li&gt;
&lt;li&gt;1 USB-B to USB-A cable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt; - Boot your Raspberry Pi&lt;br&gt;
First things first, we assume your Raspberry Pi is already booted with something like Raspberry Pi OS, connected to your network, and has SSH enabled. I do a headless set-up following &lt;a href="https://www.tomshardware.com/reviews/raspberry-pi-headless-setup-how-to,6028.html" rel="noopener noreferrer"&gt;this&lt;/a&gt; Tom's Hardware guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt; - Set up VNC&lt;br&gt;
If you've set up SSH correctly, this is as easy as installing a VNC viewer client like &lt;a href="https://tigervnc.org/" rel="noopener noreferrer"&gt;TigerVNC&lt;/a&gt; on another device of choice, then connecting with your SSH credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt; - Install Audacity&lt;br&gt;
Next, install &lt;a href="https://www.audacityteam.org/" rel="noopener noreferrer"&gt;Audacity&lt;/a&gt; on your Pi &amp;amp; main device - it's a free audio recording and editing program. You can opt to only install it on the Pi and do your editing directly via VNC, but I prefer transferring the raw files over and using the power of my desktop to split tracks.&lt;/p&gt;

&lt;p&gt;Now you're ready to record! &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Top tip: using a VNC viewer app on your phone makes it easier to walk around the house listening to the record and being ready to stop or pause the recording when you need to flip sides!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that you've recorded both sides of a record, the next step is to export the full (unsplit) audio as a &lt;code&gt;.wav&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;I then use &lt;code&gt;scp&lt;/code&gt; to transfer these files onto my main desktop for editing. Example usage of this (assuming the current directory is something like &lt;code&gt;raw-vinyl-albums&lt;/code&gt;):&lt;br&gt;
&lt;code&gt;scp &amp;lt;user&amp;gt;@raspberrypi.local:Music/raw/&amp;lt;album_name&amp;gt;.wav .&lt;/code&gt;&lt;/p&gt;
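
If you're pulling over several albums at once, that scp call is easy to script; a small sketch that just builds the commands (the user, host, and remote path here are assumptions matching the example above):

```python
import shlex

def scp_commands(albums, user="pi", host="raspberrypi.local", remote_dir="Music/raw"):
    # Build one scp command per album's raw .wav capture on the Pi
    return [
        f"scp {user}@{host}:{shlex.quote(remote_dir + '/' + album + '.wav')} ."
        for album in albums
    ]
```

Each string can then be run with your shell or subprocess of choice.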

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv624yms02hhx5wwejf5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv624yms02hhx5wwejf5e.png" alt="Screenshot of Audacity" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you have the full album recording locally, you can use Audacity on your desktop to split tracks, export as mp3, add metadata &amp;amp; album art &lt;a href="https://www.mp3tag.de/en/" rel="noopener noreferrer"&gt;(check out Mp3tag)&lt;/a&gt; and transfer your new audio files to as many devices as you want, since you own the files!&lt;/p&gt;

&lt;p&gt;Yay for media ownership! And for integrating a raspberry pi into my vinyl setup!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp72nm3thjp8mszv4i3db.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp72nm3thjp8mszv4i3db.png" alt="Screenshot of Foobar2000 - a desktop audio player" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>vinyl</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Using Terraform to Configure BigQuery Data Transfer Service for Google Ads &amp; GA4</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Tue, 13 Jan 2026 05:32:17 +0000</pubDate>
      <link>https://forem.com/charlottetowell/using-terraform-to-configure-bigquery-data-transfer-service-for-google-ads-ga4-4347</link>
      <guid>https://forem.com/charlottetowell/using-terraform-to-configure-bigquery-data-transfer-service-for-google-ads-ga4-4347</guid>
      <description>&lt;p&gt;The benefit of staying within the Google ecosystem shines through when it comes to syncing data into BigQuery with the &lt;a href="https://docs.cloud.google.com/bigquery/docs/dts-introduction" rel="noopener noreferrer"&gt;Data Transfer Service&lt;/a&gt;. Rather than building our own API-based integrations and handle scheduling, it is almost* completely handled for us in the case of Google Ads &amp;amp; GA4!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note the *almost&lt;/em&gt;. For an e-commerce use case, an acceptable delay can be to always have yesterday's data available. Unfortunately, when it comes to BQDTS, the automated syncs only keep you up to date to within roughly a 2-day window.&lt;/p&gt;

&lt;p&gt;After a lot of pulling my hair out trying to figure out whether a timezone conversion was failing me, I came to the discovery that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scheduled data transfers can only sync data as recent as 2 days in the past&lt;/li&gt;
&lt;li&gt;Manually triggered backfills for a 'date range' have this same limitation&lt;/li&gt;
&lt;li&gt;Manually triggered backfills for 'run one-time transfer' magically do pull through the latest data available up until the current date&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Therefore, using a combination of the normal scheduled sync + an automated 'manually' triggered sync, we can achieve our goal of yesterday's-data availability. To do this, I used &lt;a href="https://dev.to/charlottetowell/self-scheduling-recurring-cloud-tasks-with-terraform-python-code-50p9"&gt;self-scheduling cloud tasks&lt;/a&gt;, which I recently posted about, to handle the daily 'one-time transfer' sync.&lt;/p&gt;

&lt;p&gt;BQDTS doesn't just sync the single latest day per scheduled run; rather, it refreshes up to 7 days in the past each run, so we can be assured that our partial data will always be updated in full the following day.&lt;/p&gt;
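
To make that window concrete, a quick stdlib-only sketch of the dates a single run would rewrite, assuming the 7-day refresh behaviour described:

```python
from datetime import date, timedelta

def refreshed_dates(run_date, window_days=7):
    # Dates a single transfer run would (re)write: yesterday back through
    # window_days ago, per the 7-day refresh behaviour described above
    return [run_date - timedelta(days=n) for n in range(1, window_days + 1)]
```

So a run on a given day re-covers yesterday plus the six days before it, which is what lets partial data self-heal.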

&lt;p&gt;This system has been running well for me for a little while now. A slight hassle? Absolutely. But I still much prefer the ease of using Google's transfer service to handle all my data processing over writing the integration myself from scratch.&lt;/p&gt;

&lt;p&gt;Without further ado, here are the code examples used to make this run:&lt;/p&gt;




&lt;h2&gt;
  
  
  Create the destination dataset
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_bigquery_dataset" "skipper_au_google_analytics_ga4" {
  dataset_id                 = "skipper_au_google_analytics_ga4"
  friendly_name             = "Skipper Dataset for Google Analytics GA4 - AU"
  description               = "Dataset for Skipper Google Analytics GA4 BigQuery Data Transfer Service Integration"
  location                  = "australia-southeast1"
  project                   = var.project_id
  default_table_expiration_ms = null
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configure BigQuery Data Transfer Service with an 'every day' frequency
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_bigquery_data_transfer_config" "google_analytics__ga4_transfer" {
  project                = var.project_id
  display_name           = "Google Analytics GA4 Transfer"
  location               = var.region
  data_source_id         = "ga4"
  destination_dataset_id = google_bigquery_dataset.skipper_au_google_analytics_ga4.dataset_id
  schedule               = "every day 18:00" #5AM AEST
  service_account_name   = var.bq_data_transfer_sa_email

  params = {
    property_id           = var.google_analytics_property_id
    table_filter          = "Audiences,DemographicDetails,EcommercePurchases,Events,LandingPage,PagesAndScreens,Promotions,TechDetails,TrafficAcquisition,UserAcquisition"
  }

  depends_on = [google_bigquery_dataset.skipper_au_google_analytics_ga4]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now, the trick to enable our latest day stats coming through:&lt;/p&gt;

&lt;h2&gt;
  
  
  Call the manual transfer run via API
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.cloud&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bigquery_datatransfer_v1&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;manual_bigquery_data_transfer_run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TRANSFER_CONFIG_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bigquery_datatransfer_v1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DataTransferServiceClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;LOCATION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;location&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;parent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;projects/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;GCP_PROJECT_ID&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/locations/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;LOCATION&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/transferConfigs/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;TRANSFER_CONFIG_ID&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;


    &lt;span class="c1"&gt;# Use proper UTC+10 timezone object
&lt;/span&gt;    &lt;span class="n"&gt;tz_utc_plus_10&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hours&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;now_utc10&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tz_utc_plus_10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;run_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;now_utc10&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;minutes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bigquery_datatransfer_v1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;types&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StartManualTransferRunsRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;requested_run_time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;run_time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_manual_transfer_runs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
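
The run-time arithmetic in that function can be exercised on its own with just the standard library (the fixed UTC+10 offset matches the AEST assumption above):

```python
from datetime import datetime, timedelta, timezone

def requested_run_time(now=None):
    # Mirror the logic above: take "now" in UTC+10 and shift it
    # 30 minutes into the past to use as the requested run time
    tz_utc_plus_10 = timezone(timedelta(hours=10))
    if now is None:
        now = datetime.now(tz_utc_plus_10)
    return (now - timedelta(minutes=30)).isoformat()
```

Backdating slightly keeps the requested run time safely in the past from the service's point of view.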



</description>
      <category>gcp</category>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>bigquery</category>
    </item>
    <item>
      <title>Self-Scheduling Recurring Cloud Tasks (with Terraform + Python code)</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Mon, 12 Jan 2026 09:48:22 +0000</pubDate>
      <link>https://forem.com/charlottetowell/self-scheduling-recurring-cloud-tasks-with-terraform-python-code-50p9</link>
      <guid>https://forem.com/charlottetowell/self-scheduling-recurring-cloud-tasks-with-terraform-python-code-50p9</guid>
      <description>&lt;p&gt;Using a fully serverless cloud set-up, sometimes the easy questions like: "how can I run this function on a recurring basis?" are not so easy to answer. CRON job? nope.&lt;/p&gt;

&lt;p&gt;Instead, this is how I achieve it utilising scheduled Cloud Tasks for a fully serverless approach.&lt;/p&gt;

&lt;p&gt;Let's say we have a Cloud Run Function we want to call on a daily, or perhaps weekly, basis. The process we'll follow is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up a Cloud Tasks queue&lt;/li&gt;
&lt;li&gt;Set up the function to reschedule itself&lt;/li&gt;
&lt;li&gt;Create our initial task to set off the chain&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Service Account
&lt;/h3&gt;

&lt;p&gt;First, we create the service account used to run our cloud function&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_service_account" "&amp;lt;my_sa&amp;gt;" {
  account_id   = "&amp;lt;my_sa&amp;gt;"
  display_name = "My Service Account"
  description  = "Service account used for my function"
  project      = var.project_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we add all required roles, specifically those for Cloud Tasks &amp;amp; invoking Cloud Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_project_iam_member" "&amp;lt;my_sa&amp;gt;" {
  for_each = toset([
    "roles/run.invoker",
    "roles/cloudtasks.enqueuer"
  ])

  project = var.project_id
  role    = each.value
  member  = "serviceAccount:${google_service_account.&amp;lt;my_sa&amp;gt;.email}"

  depends_on = [google_service_account.&amp;lt;my_sa&amp;gt;]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and finally, we also need to allow the service account to impersonate itself for queuing the next cloud task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_service_account_iam_member" "&amp;lt;my_sa_self_impersonation&amp;gt;" {
  service_account_id = google_service_account.&amp;lt;my_sa&amp;gt;.name
  role               = "roles/iam.serviceAccountUser"
  member             = "serviceAccount:${google_service_account.&amp;lt;my_sa&amp;gt;.email}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cloud Task Queue
&lt;/h3&gt;

&lt;p&gt;Next, create a Cloud Tasks queue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_cloud_tasks_queue" "&amp;lt;my_task_queue_nane&amp;gt;" {
  name     = "&amp;lt;my_task_queue_name&amp;gt;"
  location = var.region

  http_target {
    http_method = "POST"
    oidc_token {
      service_account_email = google_service_account.&amp;lt;my_sa&amp;gt;.email
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Queue Task from Function
&lt;/h3&gt;

&lt;p&gt;Then, we can add a final bit of code to the end of our function to schedule the next task. This is where we set any rules, such as how far ahead to schedule (1 day, 1 week, etc.) and what time of day to run. Any shared data to be passed along must go in the HTTP body, with the function's entry point set up to parse it accordingly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;cloud_task_scheduler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;days_increment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tasks_v2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;CloudTasksClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;parent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;queue_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;GCP_PROJECT_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LOCATION&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CLOUD_TASK_QUEUE_ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;FUNCTION_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;my_cloud_function_url&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="c1"&gt;# define next task
&lt;/span&gt;        &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;schedule_next_cloud_task&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;body_bytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;scheduled_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;days_increment&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;minute&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;second&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;microsecond&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;ts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;timestamp_pb2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;FromDatetime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scheduled_date&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;tasks_v2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CreateTaskRequest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http_request&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http_method&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;tasks_v2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HttpMethod&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;POST&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;FUNCTION_URL&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;body_bytes&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;schedule_time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# add task to queue
&lt;/span&gt;        &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error scheduling next Cloud Task: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Schedule the First Task
&lt;/h3&gt;

&lt;p&gt;To set the chain off running, we must schedule the first Cloud Task. The simplest way is via the command line with the Google Cloud CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud tasks create-http-task my-first-task &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--queue&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-queue &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"my-function-url"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;POST &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--body-content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"schedule_next_cloud_task": true}'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--header&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Content-Type"&lt;/span&gt;:&lt;span class="s2"&gt;"application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--oidc-service-account-email&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"my-sa-email"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--location&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"region"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--schedule-time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"YYYY-MM-DDTHH:MM:SS+00:00"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
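&lt;p&gt;The &lt;code&gt;--schedule-time&lt;/code&gt; flag expects an RFC 3339 timestamp. A quick way to generate one is shown below; this is just a sketch, assuming (like the function above) a run at 16:00 UTC the next day.&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

# Tomorrow at 16:00 UTC, formatted as RFC 3339 for --schedule-time
run_at = (datetime.now(timezone.utc) + timedelta(days=1)).replace(
    hour=16, minute=0, second=0, microsecond=0
)
print(run_at.isoformat())  # e.g. 2026-02-01T16:00:00+00:00
```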



&lt;p&gt;And just like that, you should see your first task sitting in the queue waiting to run! Of course, I recommend testing first to ensure that your function can run fully and then create its following task successfully.&lt;/p&gt;




&lt;p&gt;This approach can be used for a daily schedule like the one outlined above, or even to split up long-running tasks that might otherwise exceed timeouts, by passing shared data through the body of the task request or writing it to a database.&lt;/p&gt;
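&lt;p&gt;As a minimal sketch, the function's entry point might parse that shared data like this. The &lt;code&gt;schedule_next_cloud_task&lt;/code&gt; key matches the payload used earlier; the &lt;code&gt;cursor&lt;/code&gt; key is purely hypothetical, standing in for whatever state a long-running job needs to resume.&lt;/p&gt;

```python
import json

def parse_task_body(body_bytes: bytes) -> dict:
    """Parse the JSON body passed along by the previous Cloud Task."""
    data = json.loads(body_bytes.decode("utf-8")) if body_bytes else {}
    # Default to rescheduling unless the payload explicitly opts out
    data.setdefault("schedule_next_cloud_task", True)
    return data

payload = parse_task_body(b'{"schedule_next_cloud_task": true, "cursor": "page-42"}')
# payload["cursor"] could be used to resume a long-running job where it left off
```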

&lt;p&gt;The benefit: because the cost of Cloud Tasks is basically nothing (literally free below certain usage tiers), this method is extremely efficient. We only pay for compute while our function is running, and it effectively scales to zero while waiting for the next Cloud Task to run.&lt;/p&gt;

&lt;p&gt;Hopefully this helps anyone else after a simple, efficient way of implementing a self-sustaining schedule with serverless compute!&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>terraform</category>
      <category>python</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Git Integrated Workflow for Shopify Theme Development</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Mon, 12 Jan 2026 09:26:01 +0000</pubDate>
      <link>https://forem.com/charlottetowell/git-integrated-workflow-for-shopify-theme-development-44l9</link>
      <guid>https://forem.com/charlottetowell/git-integrated-workflow-for-shopify-theme-development-44l9</guid>
      <description>&lt;p&gt;The very first port of call when onboarding to any ecommerce business using Shopify should be to ask the single question: how do you back-up your theme code?&lt;/p&gt;

&lt;p&gt;And if the answer isn't git, you've now got your first task!&lt;/p&gt;

&lt;p&gt;Below is the end-to-end workflow I now use to manage our Shopify theme versioning via the &lt;a href="https://shopify.dev/docs/storefronts/themes/tools/github" rel="noopener noreferrer"&gt;Shopify Github Integration&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shopify CLI
&lt;/h2&gt;

&lt;p&gt;Other than the obvious like git, you'll want to make sure you also have &lt;a href="https://shopify.dev/docs/api/shopify-cli" rel="noopener noreferrer"&gt;Shopify CLI&lt;/a&gt; installed so you can get started with &lt;code&gt;shopify theme pull&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Branches
&lt;/h2&gt;

&lt;p&gt;Unlike most software development, which generally has a single release candidate, ecommerce stores often cycle through different themes corresponding to campaigns or product releases. For this reason, I don't use the &lt;code&gt;main&lt;/code&gt; branch as our live theme; instead, it holds the most up-to-date stable version of the core theme.&lt;/p&gt;

&lt;p&gt;Instead, I opt for the following branching strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;development&lt;/code&gt; - current development branch&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;review&lt;/code&gt; - "Review Only" theme synced to Shopify for other team-mates to review&lt;/li&gt;
&lt;li&gt;&lt;code&gt;main&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;live-theme&lt;/code&gt; - current live theme in Shopify&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that these branches are not necessarily persistent (except for &lt;code&gt;main&lt;/code&gt;); they are usually named something like &lt;code&gt;live-theme-campaign-X&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This allows asynchronous development and marketing reviews without impacting our current live theme, as well as the ability to prepare for future theme changes like sale events. By ensuring that the currently published theme is connected to a branch, we retain the ability to push any urgent updates or bug fixes while still allowing any direct edits to be made in Shopify.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pull Requests
&lt;/h2&gt;

&lt;p&gt;Pull requests are always made to the &lt;code&gt;main&lt;/code&gt; branch, documenting the primary line of changes &amp;amp; updates. Updates to a live theme are still first merged to &lt;code&gt;main&lt;/code&gt;, and then &lt;code&gt;main&lt;/code&gt; merged into the &lt;code&gt;live-theme&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tagging
&lt;/h2&gt;

&lt;p&gt;An optional step we implement is creating tags to label key releases, usually the accumulation of many PR merges to &lt;code&gt;main&lt;/code&gt; at the point where it becomes a new live theme at the beginning of a campaign. For this we have another branch, &lt;code&gt;bau&lt;/code&gt;, which we merge to from &lt;code&gt;main&lt;/code&gt; via a PR, plus a GitHub Action that auto-creates the tag. The main point of this is historical tracking of themes mapped to campaigns, but it is also a great way to mark available backup points.&lt;/p&gt;




&lt;p&gt;This is the process that works for my small team, where it is primarily me working on the theme with some input from other colleagues: a good balance of maintaining git best practice without overcomplicating things for the scale we need.&lt;/p&gt;

</description>
      <category>shopify</category>
      <category>git</category>
      <category>github</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Google Sheets x BigQuery Sync using Google Apps Script</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Mon, 13 Oct 2025 06:33:45 +0000</pubDate>
      <link>https://forem.com/charlottetowell/google-sheets-x-bigquery-sync-using-google-apps-script-1d2a</link>
      <guid>https://forem.com/charlottetowell/google-sheets-x-bigquery-sync-using-google-apps-script-1d2a</guid>
      <description>&lt;p&gt;If you haven't discovered Google Apps Script yet (ahem, me a week ago) then you are seriously missing out!&lt;/p&gt;

&lt;p&gt;This &lt;strong&gt;free&lt;/strong&gt; service included in the Google Suite can be extremely powerful when wielded with the right knowledge. I recommend checking out Benson King'ori's &lt;a href="https://dev.to/virgoalpha/mastering-google-apps-script-free-automation-in-google-workspace-3g1e"&gt;dev.to article&lt;/a&gt; for a super in-depth explanation &amp;amp; intro of apps script.&lt;/p&gt;

&lt;p&gt;Here's how I used it to set up a user-triggered Google Sheets -&amp;gt; BigQuery sync.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tldr;&lt;/strong&gt; save a JSON service account key in PropertiesService to access any GCP API. You're welcome :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You'll need to have a Google Cloud Project &amp;amp; a Google Workspace to work in.&lt;/p&gt;

&lt;p&gt;Optional, but I'd also suggest installing &lt;a href="https://developers.google.com/apps-script/guides/clasp" rel="noopener noreferrer"&gt;clasp&lt;/a&gt; - a CLI for working with apps script locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Credentials
&lt;/h2&gt;

&lt;p&gt;Perhaps the most obvious hurdle to tackle here is how to ensure secure authentication with our GCP project when a user initiates a sync from Google Sheets. This is where we'll use a JSON service account key and the script's PropertiesService to store it.&lt;/p&gt;

&lt;p&gt;This is the terraform snippet I used to create my service account in GCP, granting only the necessary permissions for BigQuery sync:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_service_account" "google_apps_script_sa" {
  account_id   = "google-apps-script-sa"
  display_name = "Google Apps Script Service Account"
  description  = "Service account for Google Apps Script"
}

resource "google_project_iam_member" "act_as_google_apps_script_sa" {
  for_each = toset([
    "roles/bigquery.dataEditor",
    "roles/bigquery.jobUser",
    "roles/bigquery.readSessionUser",
    "roles/bigquery.user"
  ])

  project = var.project_id
  role    = each.value
  member  = "serviceAccount:${google_service_account.google_apps_script_sa.email}"

  depends_on = [google_service_account.google_apps_script_sa]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once created, you can generate a JSON service account key and then store it (joined as a single line) in the PropertiesService of the script.&lt;/p&gt;
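&lt;p&gt;Collapsing the downloaded (pretty-printed) key file to a single line is a one-liner; a sketch below, with the key's values truncated and hypothetical.&lt;/p&gt;

```python
import json

# Re-serialise a pretty-printed service account key as one line,
# ready to paste into Script Properties (e.g. under SA_KEY_JSON)
pretty = """{
  "type": "service_account",
  "client_email": "my-sa@my-project.iam.gserviceaccount.com"
}"""
one_line = json.dumps(json.loads(pretty))
print(one_line)
```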

&lt;h2&gt;
  
  
  User Interaction
&lt;/h2&gt;

&lt;p&gt;I created my script as "Bound" to a certain Google Sheets file, and within it, I wanted the user to select which Sheet to sync. Therefore, we can create a &lt;code&gt;.html&lt;/code&gt; file in our project to define the popup dialog, hooking up a 'Sync' button to a &lt;code&gt;google.script.run&lt;/code&gt; block as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$('submitBtn').addEventListener('click', function (e) {
  // ...

    google.script.run
      .syncToBigQuery(...);
  });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2amwe98yef6updk0le89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2amwe98yef6updk0le89.png" alt="Screenshot of Dialog" width="800" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can then include this dialog as a custom menu option in Sheets by defining a &lt;code&gt;createMenu&lt;/code&gt; item that shows the popup as a modal dialog.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Menu.js
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('Data Tools')
    .addItem('Sync to BigQuery', 'showSheetSelector')
    .addToUi();
}
function showSheetSelector() {
  const t = HtmlService.createTemplateFromFile('Dialog');

  const html = t.evaluate()
    .setWidth(420)
    .setHeight(260);

  SpreadsheetApp.getUi().showModalDialog(html, 'Sync Sheet to BigQuery');
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Authentication
&lt;/h2&gt;

&lt;p&gt;Now for where the real magic happens: to make use of our service account key, we use the following function to mint an OAuth2 token that allows us to interact with BigQuery (this relies on the apps-script-oauth2 library's &lt;code&gt;OAuth2.createService&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function getServiceAccountToken_(scopes) {
  const raw = PropertiesService.getScriptProperties().getProperty('SA_KEY_JSON');
  if (!raw) throw new Error('Missing SA_KEY_JSON in Script Properties');
  const key = JSON.parse(raw);

  const service = OAuth2.createService('sa-jwt')
    .setTokenUrl('https://oauth2.googleapis.com/token')
    .setPrivateKey(key.private_key)
    .setIssuer(key.client_email)
    .setPropertyStore(PropertiesService.getScriptProperties())
    .setScope(scopes.join(' '));

  const accessToken = service.getAccessToken();
  if (!accessToken) throw new Error('Failed to obtain access token: ' + service.getLastError());
  return { accessToken };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using it is as simple as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; const { accessToken } = getServiceAccountToken_([
    'https://www.googleapis.com/auth/bigquery'
  ]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The possibilities are endless
&lt;/h2&gt;

&lt;p&gt;It's clear how easy this is to extend to any other GCP service! All you have to do now is cross-reference the &lt;a href="https://cloud.google.com/bigquery/docs/reference/rest" rel="noopener noreferrer"&gt;docs&lt;/a&gt; and form your REST API calls to BigQuery's API. The only limitation to what you can do here is your imagination...&lt;/p&gt;
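&lt;p&gt;For instance, BigQuery's &lt;code&gt;jobs.query&lt;/code&gt; method is a plain HTTPS POST. Sketched here in Python for illustration; the project ID and query are placeholders, and in Apps Script the request would go through &lt;code&gt;UrlFetchApp.fetch&lt;/code&gt; with the minted access token in an Authorization header.&lt;/p&gt;

```python
import json

# Shape of a BigQuery REST call (project ID and query are placeholders)
project = "my-project"
url = f"https://bigquery.googleapis.com/bigquery/v2/projects/{project}/queries"
body = json.dumps({"query": "SELECT 1", "useLegacySql": False})
# Send with any HTTP client, passing the OAuth2 access token as a
# Bearer token in the Authorization header.
```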

&lt;p&gt;...and of course:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The APIs available to us for GCP services&lt;/li&gt;
&lt;li&gt;The runtime limits associated with Google Apps Script&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nonetheless, this proved a simple &amp;amp; highly suitable method for my use case: allowing non-technical users who occasionally update a lookup spreadsheet to keep data in sync with our BigQuery warehouse. A full-fledged scheduled sync would've been overkill for a job so small and infrequent. A custom menu in Google Sheets was the perfect way of meeting the users where they're at.&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>bigquery</category>
      <category>automation</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Using GCP Load Balancer to Handle 301 Redirects to Other Domains</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Tue, 29 Jul 2025 00:36:56 +0000</pubDate>
      <link>https://forem.com/charlottetowell/using-gcp-load-balancer-to-handle-301-redirects-to-other-domains-3go6</link>
      <guid>https://forem.com/charlottetowell/using-gcp-load-balancer-to-handle-301-redirects-to-other-domains-3go6</guid>
      <description>&lt;p&gt;Some of the main use cases of using a load balancer include setting up a static IP for use with a custom domain, or by dispersing traffic across multiple backend services.&lt;/p&gt;

&lt;p&gt;A possibly less common use case, however, is using the balancer to handle 301 redirects when your web application has undergone a domain migration. Here's how I set this up:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6uo1bvruywj4v8nxeg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6uo1bvruywj4v8nxeg7.png" alt="Domain Redirection using Load Balancer" width="709" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure DNS Zones &amp;gt; Recordsets
&lt;/h3&gt;

&lt;p&gt;In the case of a domain migration, you must route the old domain(s) to a load balancer. I chose to use the same load balancer as my backend service to centralise the logic; otherwise, you may need a 'dummy' backend that is never reached.&lt;/p&gt;

&lt;h3&gt;
  
  
  Routing Rules
&lt;/h3&gt;

&lt;p&gt;Assuming you already have an application load balancer set up in Google (if not, see &lt;a href="https://dev.to/charlottetowell/custom-domain-for-web-apps-on-cloud-run-how-to-set-up-application-load-balancer-on-gcp-4kn0"&gt;here&lt;/a&gt;), we can jump straight into editing the routing rules of the load balancer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1omj0et49b0whmgt8u6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1omj0et49b0whmgt8u6.png" alt=" " width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep your existing routing rules, including default, to route to your desired backend(s). We will be clicking &lt;code&gt;Add host and path rule&lt;/code&gt;, specifically for the old domain.&lt;/p&gt;

&lt;p&gt;For the &lt;code&gt;Hosts&lt;/code&gt;, add in your old domain(s). &lt;br&gt;
For path matcher, you can copy &amp;amp; paste the following template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;defaultService: projects/&amp;lt;project-id&amp;gt;/global/backendServices/&amp;lt;backend-service-id&amp;gt;
name: matcher1
routeRules:
- urlRedirect:
    httpsRedirect: true
    stripQuery: true
    hostRedirect: newdomain.com
    redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  matchRules:
  - prefixMatch: /auth/  # only for olddomain.com/auth/*
  priority: 1
- urlRedirect:
    httpsRedirect: true
    stripQuery: true
    hostRedirect: newdomain.com/login
    redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  matchRules:
  - prefixMatch: /util/login  # only for olddomain.com/util/login
  priority: 2
- urlRedirect:
    httpsRedirect: true
    stripQuery: true
    hostRedirect: newdomain.com
    redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  matchRules:
  - prefixMatch: /util/  # only for olddomain.com/util/*
  priority: 3
- urlRedirect:
    httpsRedirect: true
    stripQuery: true
    hostRedirect: anotherdomain.com
    redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  matchRules:
  - prefixMatch: /  # everything else
  priority: 4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important Points:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each rule (path matcher) must have a unique priority. Rules are evaluated in priority order, so you must put more specific matches first.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;MOVED_PERMANENTLY_DEFAULT&lt;/code&gt; code is what sets our 301 response&lt;/li&gt;
&lt;li&gt;You can redirect to other domains &amp;amp; websites that are not the load balancer backend service, eg &lt;code&gt;otherdomain.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After the redirect to &lt;code&gt;newdomain&lt;/code&gt;, your existing routing rules (if any) will handle any further routing before reaching the backend service(s)&lt;/li&gt;
&lt;/ul&gt;
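&lt;p&gt;The first-match-wins behaviour of the prefix rules above can be illustrated in a few lines of Python. This is only a model of the template's four rules for checking your expectations, not how the load balancer itself works.&lt;/p&gt;

```python
# Mirror of the path-matcher template: rules are checked in priority
# order, so more specific prefixes must come first.
RULES = [
    ("/auth/", "https://newdomain.com"),
    ("/util/login", "https://newdomain.com/login"),
    ("/util/", "https://newdomain.com"),
    ("/", "https://anotherdomain.com"),  # catch-all
]

def redirect_for(path: str) -> str:
    """Return the redirect target for a request path on the old domain."""
    for prefix, target in RULES:
        if path.startswith(prefix):
            return target
    return ""

print(redirect_for("/util/login"))  # https://newdomain.com/login
print(redirect_for("/anything"))    # https://anotherdomain.com
```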

&lt;p&gt;At this point, you can save &amp;amp; test. You should now see the load balancer redirecting traffic from your old domain to the new one based on whichever rules you have set!&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Custom Domain for Web Apps on Cloud Run - How to Set Up Application Load Balancer on GCP</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Tue, 29 Jul 2025 00:09:57 +0000</pubDate>
      <link>https://forem.com/charlottetowell/custom-domain-for-web-apps-on-cloud-run-how-to-set-up-application-load-balancer-on-gcp-4kn0</link>
      <guid>https://forem.com/charlottetowell/custom-domain-for-web-apps-on-cloud-run-how-to-set-up-application-load-balancer-on-gcp-4kn0</guid>
      <description>&lt;p&gt;This guide will walk through the steps of setting up a load balancer for Cloud Run instances in GCP. As well as the other features a load balancer offers, primarily our outcome will be a static &lt;strong&gt;outbound&lt;/strong&gt; IP which we can then use in a DNS service of choice to configure a custom domain.&lt;/p&gt;

&lt;p&gt;Some other articles of mine that may be of interest:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/charlottetowell/how-to-set-up-a-static-backend-ip-for-cloud-run-revision-using-vpc-connector-104g"&gt;How to Set Up a Static Backend IP for Cloud Run Revision using VPC Connector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/charlottetowell/using-gcp-load-balancer-to-handle-301-redirects-to-other-domains-3go6"&gt;Using GCP Load Balancer to Handle 301 Redirects to Other Domains&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You should already have a Cloud Run instance deployed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating a Load Balancer
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;This will roughly follow the guide as per &lt;a href="https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless" rel="noopener noreferrer"&gt;Set up a classic Application Load Balancer&lt;/a&gt; in the docs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Create a new load balancer, choosing the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application Load Balancer (HTTP/HTTPS)&lt;/li&gt;
&lt;li&gt;Public facing (external)&lt;/li&gt;
&lt;li&gt;Best for global workloads&lt;/li&gt;
&lt;li&gt;Global external Application Load Balancer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1y8u8p0pl5i8t1xh1sct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1y8u8p0pl5i8t1xh1sct.png" alt="Create Load Balancer" width="749" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Frontend Configuration&lt;br&gt;
First, we need to reserve a static IP by clicking "Create IP address"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd10kw3q9dpfe2gk03rp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd10kw3q9dpfe2gk03rp.png" alt="Create IP Address" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep the IP version as IPv4, keep the port as 443 (important for Cloud Run!), and be sure to choose an active certificate. We will keep the "additional certificates" as the GCP default.&lt;/p&gt;
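&lt;p&gt;If you prefer the CLI, reserving the same global static IP can be done with &lt;code&gt;gcloud&lt;/code&gt; (the address name below is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Reserve a global static IPv4 address for the load balancer frontend
gcloud compute addresses create my-lb-ip --global --ip-version=IPV4

# Print the reserved address
gcloud compute addresses describe my-lb-ip --global --format="value(address)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;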

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Backend Configuration&lt;br&gt;
When creating the backend configuration, set the backend type as "Serverless network endpoint group". This then gives us the option to &lt;code&gt;Add a backend&lt;/code&gt; and &lt;code&gt;Create a new Serverless network endpoint group&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf25xg4ckqcfgrsyn9h8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf25xg4ckqcfgrsyn9h8.png" alt="Create Backend Configuration" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;Create Serverless network endpoint group&lt;/code&gt;, select the Cloud Run instance that will receive all the traffic from the load balancer.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuga8wvxm5ehiv1ll0afq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuga8wvxm5ehiv1ll0afq.png" alt="Create Serverless Network Endpoint Group" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you intend to use your load balancer to distribute traffic over multiple backend services, this is where you can add them, then later set routing rules to control which requests go to each.&lt;/p&gt;

&lt;p&gt;In the meantime, we can leave everything else as the default options. Note that you can also create a custom &lt;a href="https://cloud.google.com/armor/docs/configure-security-policies" rel="noopener noreferrer"&gt;Cloud Armor policy&lt;/a&gt; at this point, for purposes such as IP whitelisting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Routing Rules&lt;br&gt;
If you are only using the load balancer for a custom domain, you can leave Routing rules as &lt;code&gt;Simple host and path rule&lt;/code&gt;. If you set up multiple backend services in the previous step, this is where you configure the routing for them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Review &amp;amp; Finalise&lt;br&gt;
Finally, review &amp;amp; finalise; your options should look similar to mine in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5kg70jwsd6wgscop4xn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5kg70jwsd6wgscop4xn.png" alt="Review &amp;amp; Finalise" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Public IP&lt;br&gt;
Once the load balancer has finished creating, you can click into its details to reveal the public IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w4i41rt7u60pbr1r77f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w4i41rt7u60pbr1r77f.png" alt="Public IP" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Configure DNS&lt;br&gt;
You can now use this IP in your DNS provider of choice to configure a custom domain.&lt;/p&gt;
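&lt;p&gt;Once the A record is in place, you can sanity-check it from the command line (the domain below is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Confirm the domain resolves to the load balancer's IP
dig +short app.example.com

# Test HTTPS once the certificate has finished provisioning
curl -I https://app.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;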

&lt;h2&gt;
  
  
  And done!
&lt;/h2&gt;

&lt;p&gt;Now all traffic to your custom domain will be received by the load balancer, and then routed to your backend Cloud Run service(s) as per the routing rules you set!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i8vziyor41qeys1gu2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i8vziyor41qeys1gu2t.png" alt="Application Load Balancer Cloud Run Diagram" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Image source: &lt;a href="https://cloud.google.com/static/load-balancing/images/lb-serverless-run-ext-https.svg" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>googlecloud</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
    <item>
      <title>MySQL 5.7 to 8.0 on Google CloudSQL: In-Place Migration</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Wed, 16 Jul 2025 01:38:11 +0000</pubDate>
      <link>https://forem.com/charlottetowell/mysql-57-80-on-google-cloudsql-inplace-migration-3gc3</link>
      <guid>https://forem.com/charlottetowell/mysql-57-80-on-google-cloudsql-inplace-migration-3gc3</guid>
      <description>&lt;p&gt;As of &lt;a href="https://cloud.google.com/blog/products/databases/extended-support-for-end-of-life-cloud-sql-mysql-and-postgresql" rel="noopener noreferrer"&gt;May 2025&lt;/a&gt; - MySQL 5.7 is now in extended support for CloudSQL meaning your cloud bill is about to ⬆️⬆️⬆️&lt;/p&gt;

&lt;p&gt;This is how I upgraded our CloudSQL instance(s) from MySQL 5.7 to MySQL 8.0 with minimal downtime via an in-place migration.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Always clone your instance and complete these steps fully on the clone before performing them on your production instance&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Let's get started
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install MySQL Client - &lt;a href="https://dev.mysql.com/downloads/workbench/" rel="noopener noreferrer"&gt;Download Link for MySQL Workbench&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Install MySQL Shell - &lt;a href="https://dev.mysql.com/downloads/file/?id=539630" rel="noopener noreferrer"&gt;Download Link&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember to add both to your PATH variable so you can run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql --version
mysqlsh --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connect to Instance From Cloud Shell
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/sql/docs/mysql/connect-instance-cloud-shell" rel="noopener noreferrer"&gt;🔗 Google Docs: Connect Instance Cloud Shell&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the root user password&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;gcloud sql users set-password root
--host=%
--instance=&amp;lt;instance_id&amp;gt;
--prompt-for-password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then connect to the instance to check it worked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; gcloud sql connect &amp;lt;instance_id&amp;gt; --user=root
Allowlisting your IP for incoming connection for 5 minutes...done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also now run the &lt;code&gt;status&lt;/code&gt; command to see the current version info.&lt;/p&gt;

&lt;p&gt;Now connect via &lt;code&gt;mysqlsh&lt;/code&gt; for the purpose of saving the password (otherwise you need to enter it many times when running the utility checker script below):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;mysqlsh root@&amp;lt;ip&amp;gt;
Please provide the password for 'root@&amp;lt;ip&amp;gt;': *********************
Save password for 'root@&amp;lt;ip&amp;gt;'? [Y]es/[N]o/Ne[v]er (default No): Y
MySQL Shell 9.3.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run the MySQL Upgrade Utility Checker
&lt;/h3&gt;

&lt;p&gt;MySQL provides an &lt;a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-upgrade.html" rel="noopener noreferrer"&gt;Upgrade Utility Checker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Running it as recommended results in a timeout; instead, use the following shell script, which runs one check at a time and saves each result to a file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Prompt for IP address
echo -n "Enter MySQL server IP: "
read ip

# Create save directory
save_path="$HOME/Desktop/mysql-upgrade-checks"
mkdir -p "$save_path"

echo -e "\n${CYAN}Fetching available checks from $ip...${NC}"

# Script to list checks
script_to_list_checks='var checks = util.checkForServerUpgrade(null, { list: true }); print(JSON.stringify(checks));'

# Get list of checks
raw_checks=$(mysqlsh --js --uri root@$ip --execute="$script_to_list_checks" 2&amp;gt;&amp;amp;1)

# Parse the check names from the output
# Extract check IDs from the "- checkName" lines in the "Included:" section
checks=$(echo "$raw_checks" | awk '
/^Included:$/ { in_included = 1; next }
/^Excluded:$/ { in_included = 0; next }
in_included &amp;amp;&amp;amp; /^- / { 
    # Extract the check name (first word after "- ")
    gsub(/^- /, "")
    print $1
}
')

if [ -z "$checks" ]; then
    echo -e "${RED}Error: Could not find any checks in the response.${NC}"
    echo "Raw output: $raw_checks"
    exit 1
fi

# Count checks
check_count=$(echo "$checks" | wc -l)
echo -e "${GREEN}Found $check_count checks.${NC}"

# Script template with placeholder
script_template='var result = util.checkForServerUpgrade(null, {
  targetVersion: "8.0",
  outputFormat: "JSON",
  include: ["__CHECK_ID__"]
});
print(JSON.stringify(result, null, 2));'

# Process each check
echo "$checks" | while read -r check_id; do
    if [ -n "$check_id" ]; then
        output_file="$save_path/$check_id.json"
        echo -e "${YELLOW}Running check: $check_id ...${NC}"

        # Replace placeholder with actual checkId
        script=$(echo "$script_template" | sed "s/__CHECK_ID__/$check_id/g")

        # Run the check and save output, filtering out "undefined"
        mysqlsh --js --uri root@$ip --execute="$script" 2&amp;gt;&amp;amp;1 | grep -v "^undefined$" &amp;gt; "$output_file"

        if [ -f "$output_file" ]; then
            echo -e "${GREEN}✔ Saved: $output_file${NC}"
        else
            echo -e "${RED}⚠ Failed to save output for $check_id${NC}"
        fi
    fi
done

echo -e "\n${CYAN}✅ All checks completed. Results saved to: $save_path${NC}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will need the public IP address of the instance which you can find from Google Cloud Console:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmakrpm29t6wjqv22rliy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmakrpm29t6wjqv22rliy.png" alt="Instances Public IP Address" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, make sure you are IP whitelisted for your CloudSQL instance, otherwise connection will be lost part-way through the checks if &amp;gt; 5 minutes.&lt;/p&gt;
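&lt;p&gt;One way to do this is via &lt;code&gt;gcloud&lt;/code&gt; (note this replaces the instance's existing authorized networks list, so include any entries you want to keep):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add your IP to the instance's authorized networks
gcloud sql instances patch &amp;lt;instance_id&amp;gt; --authorized-networks=&amp;lt;your_ip&amp;gt;/32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;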
&lt;h3&gt;
  
  
  Review the Upgrade Utility Checker Results
&lt;/h3&gt;

&lt;p&gt;As per the above script, the results will be saved to JSON files in the specified location &lt;code&gt;$HOME/Desktop/mysql-upgrade-checks&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For this step, I created a Python file internally to parse the JSON files into a condensed, easy-to-read report. Below is a snippet of the outputted report; I highly suggest writing a script to produce a similar output for review:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=============
MySQL UPGRADE COMPATIBILITY CHECK REPORT
=============

SERVER INFORMATION
----------------------------------------
Server Address: &amp;lt;ip&amp;gt;
Current Version: 5.7.44-google-log - (Google)
Target Version: 8.0.42

EXECUTIVE SUMMARY
----------------------------------------
Total Files Processed: 37
Total Errors: 92
Total Warnings: 3093
Total Notices: 14259
Checks with Issues: 8
Checks OK: 23

❌ UPGRADE BLOCKED: Critical errors detected that must be resolved before upgrade.

CHECK SUMMARY ANALYSIS
=============================
🚨 CRITICAL ERRORS (Must Fix) (1 checks)
--------------------------------------------------
Check: MySQL syntax check for routine-like objects
  Source: syntax.json
  Affects 1 unique objects
  Affects 92 databases/schemas
  Total occurrences: 92
  Sample objects: example_proc_name (Routine)

⚠️  WARNINGS (Should Address) (5 checks)
--------------------------------------------------
...
ℹ️  NOTICES (Informational) (1 checks)
--------------------------------------------------
...
CHECK-BY-CHECK SUMMARY
============================
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Review this report and fix any critical issues accordingly. This may involve running ALTER statements to fix certain columns, or cleaning up unused legacy databases. When fixed, re-run the utility checker until you confirm there are no upgrade conflicts.&lt;/p&gt;
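&lt;p&gt;To save time while iterating, you can also re-run a single check rather than the whole suite, e.g. the routine syntax check flagged above (assuming check IDs match the JSON filenames produced by the script earlier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Re-run only the syntax check after applying fixes
mysqlsh --js --uri root@&amp;lt;ip&amp;gt; --execute \
    'util.checkForServerUpgrade(null, { targetVersion: "8.0", include: ["syntax"] });'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;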

&lt;h3&gt;
  
  
  Take Backup of Instance
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that this is an extra safeguard, as CloudSQL also takes &lt;a href="https://cloud.google.com/sql/docs/mysql/upgrade-major-db-version-inplace#upgrade-backups" rel="noopener noreferrer"&gt;automatic backups&lt;/a&gt; both pre &amp;amp; post upgrade&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;gcloud sql backups create --async --instance=&amp;lt;instance_id&amp;gt; --description=pre-mysql8-upgrade&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Manually created backups are not deleted automatically, unless that instance is deleted. &lt;a href="https://cloud.google.com/sql/docs/mysql/backup-recovery/backups#on-demand-backups" rel="noopener noreferrer"&gt;Google Docs Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View the backup to confirm it was successfully made:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;gcloud sql backups list --instance &amp;lt;instance_id&amp;gt;
ID             WINDOW_START_TIME              ERROR  STATUS      INSTANCE
&amp;lt;backup_id&amp;gt;  2025-07-04T06:25:41.618+00:00  -      SUCCESSFUL  &amp;lt;instance_id&amp;gt;
&amp;lt;backup_id&amp;gt;  2025-07-04T04:05:16.766+00:00  -      SUCCESSFUL  &amp;lt;instance_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or view more detail about a specific backup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;gcloud sql backups describe &amp;lt;backup_id&amp;gt; --instance &amp;lt;instance_id&amp;gt;
backupKind: SNAPSHOT
databaseVersion: MYSQL_5_7
description: pre-mysql8-upgrade
endTime: '2025-07-04T06:27:12.911Z'
enqueuedTime: '2025-07-04T06:25:41.618Z'
id: '&amp;lt;backup_id&amp;gt;'
instance: &amp;lt;instance_id&amp;gt;
kind: sql#backupRun
location: &amp;lt;location&amp;gt;
maxChargeableBytes: '&amp;lt;bytes&amp;gt;'
selfLink: https://sqladmin.googleapis.com/sql/v1beta4/projects/&amp;lt;project_id&amp;gt;/instances/&amp;lt;instance_id&amp;gt;/backupRuns/&amp;lt;backup_id&amp;gt;
startTime: '2025-07-04T06:25:41.627Z'
status: SUCCESSFUL
type: ON_DEMAND
windowStartTime: '2025-07-04T06:25:41.618Z'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run Upgrade (Will cause some downtime!)
&lt;/h3&gt;

&lt;p&gt;Get the database version for upgrade by running:&lt;br&gt;
&lt;code&gt;gcloud sql instances describe &amp;lt;instance_id&amp;gt; --format="table(upgradableDatabaseVersions)"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Choose the database version that you ran the MySQL utility checker against. You can see it at the top of the summary reports.&lt;/p&gt;

&lt;p&gt;In my case, this was &lt;code&gt;MYSQL_8_0_42&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gcloud sql instances patch &amp;lt;instance_id&amp;gt; --database-version=&amp;lt;DATABASE_VERSION&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Depending on the size of your instance, this may take some time, up to 1 hour. The database will only be offline during a portion of this window.&lt;/p&gt;

&lt;p&gt;You may get a timeout error, which is fine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR: (gcloud.sql.instances.patch) Operation https://sqladmin.googleapis.com/sql/v1beta4/projects/&amp;lt;project_id&amp;gt;/operations/&amp;lt;operation_id&amp;gt; is taking longer than expected. You can continue waiting for the operation by running `gcloud beta sql operations wait --project &amp;lt;project_id&amp;gt; &amp;lt;operation_id&amp;gt;`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List operations against the instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;gcloud sql operations list --instance=&amp;lt;instance_id&amp;gt; --filter=STATUS=RUNNING
NAME            TYPE    START                          END  ERROR  STATUS
&amp;lt;operation_id&amp;gt;  UPDATE  2025-07-04T06:41:23.812+00:00   T     -    RUNNING
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the operation ID to monitor the status.&lt;/p&gt;
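&lt;p&gt;For example, using the wait command given in the timeout error message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Block until the upgrade operation completes
gcloud beta sql operations wait --project &amp;lt;project_id&amp;gt; &amp;lt;operation_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;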

&lt;h3&gt;
  
  
  In the event of errors! Revert to backup
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;gcloud sql backups restore &amp;lt;backup_id&amp;gt; --restore-instance=&amp;lt;instance_id&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Complete the upgrade
&lt;/h3&gt;

&lt;p&gt;See the CloudSQL upgrade guide for recommendations about additional testing needed, e.g. updating user privileges. This will differ based on your instance configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/sql/docs/mysql/upgrade-major-db-version-inplace#complete_the_major_version_upgrade" rel="noopener noreferrer"&gt;Google Docs Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main change needed will be to update your database user permissions. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;GRANT ALL PRIVILEGES&lt;/code&gt; no longer works, so instead grant privileges explicitly, tweaking the list based on which permissions you wish to give the user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;GRANT&lt;/span&gt;
    &lt;span class="k"&gt;ALTER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;ROUTINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;CREATE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;ROUTINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TEMPORARY&lt;/span&gt; &lt;span class="n"&gt;TABLES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;VIEW&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;DELETE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;DROP&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;EVENT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;EXECUTE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;INDEX&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;LOCK&lt;/span&gt; &lt;span class="n"&gt;TABLES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;PROCESS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;REFERENCES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;DATABASES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="k"&gt;VIEW&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;TRIGGER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;UPDATE&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="nv"&gt;`user`&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;`%`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Finally, delete backups
&lt;/h3&gt;

&lt;p&gt;After a period of time, if all is well, you can delete the manually created backup via:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;gt;gcloud beta sql backups delete &amp;lt;backup_id&amp;gt; --instance=&amp;lt;instance_id&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can also use the same method to delete the CloudSQL automatic backups, else they will persist indefinitely.&lt;/p&gt;

&lt;h1&gt;
  
  
  Upgrade Complete!
&lt;/h1&gt;

&lt;p&gt;Just like that, no more extended support bills to pay to Google Cloud :)&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>googlecloud</category>
      <category>database</category>
      <category>devops</category>
    </item>
    <item>
      <title>Customising your VS Code Integrated Terminals</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Tue, 08 Jul 2025 06:04:29 +0000</pubDate>
      <link>https://forem.com/charlottetowell/customising-your-vs-code-integrated-terminals-5bo8</link>
      <guid>https://forem.com/charlottetowell/customising-your-vs-code-integrated-terminals-5bo8</guid>
      <description>&lt;p&gt;Ever wanted to create custom VS Code terminals? Me too!&lt;/p&gt;

&lt;h3&gt;
  
  
  If you're looking for a how-to, this will show you how to:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create custom integrated terminals&lt;/li&gt;
&lt;li&gt;Customise the terminal colour, icon, and display name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's get into it!&lt;/p&gt;

&lt;h1&gt;
  
  
  Terminal Integrated Profiles
&lt;/h1&gt;

&lt;p&gt;First things first, let's go into our integrated terminal settings. &lt;br&gt;
Use &lt;code&gt;ctrl+shift+p&lt;/code&gt;, then open VS Code settings. Search for "Terminal integrated profiles windows" and click "Edit in &lt;code&gt;settings.json&lt;/code&gt;".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn1oxh4t9pc21chrk88o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn1oxh4t9pc21chrk88o.png" alt="VS Code Terminal Settings" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  The all-powerful &lt;code&gt;settings.json&lt;/code&gt;
&lt;/h1&gt;

&lt;p&gt;This is where all the magic happens!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    ....
    "terminal.integrated.profiles.windows": {
        "PowerShell": {
            "source": "PowerShell",
            "icon": "terminal-powershell",
            "color": "terminal.ansiBlue"
        },
        "Command Prompt": {
            "path": [
                "${env:windir}\\Sysnative\\cmd.exe",
                "${env:windir}\\System32\\cmd.exe"
            ],
            "args": [],
            "icon": "terminal-cmd"
        },
        "Git Bash": {
            "path": "C:\\Program Files\\Git\\bin\\bash.exe",
            "args": [],
            "icon": "github",
            "color": "terminal.ansiMagenta"
        },
        "Google Cloud SDK": {
            "path": "C:\\Windows\\System32\\cmd.exe",
            "args": ["/k", "C:\\Users\\&amp;lt;User&amp;gt;\\AppData\\Local\\Google\\Cloud SDK\\cloud_env.bat"],
            "icon": "cloud",
            "overrideName": "Google Cloud SDK",
            "color": "terminal.ansiYellow"
        }
    },
    "terminal.integrated.defaultProfile.windows": "Git Bash"
    ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see something like the above, specifically looking for the &lt;code&gt;terminal.integrated.profiles.windows&lt;/code&gt; object.&lt;/p&gt;

&lt;p&gt;Each object here will be available as an option when creating a new terminal, so you can add as many as you want!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuc9ne82xgkcgy4u1fb71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuc9ne82xgkcgy4u1fb71.png" alt="Terminal Options" width="363" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Terminal Paths vs. PATH env variable
&lt;/h1&gt;

&lt;p&gt;At this point you may be wondering about my custom "Google Cloud SDK" terminal...&lt;/p&gt;

&lt;p&gt;By creating a terminal that points directly at the &lt;code&gt;.bat&lt;/code&gt; file, you get an easy-to-open dedicated terminal for running &lt;code&gt;gcloud&lt;/code&gt; commands, instead of making them globally available by adding to your system PATH (which of course you could still do as well~)&lt;/p&gt;

&lt;h1&gt;
  
  
  Customisation Settings
&lt;/h1&gt;

&lt;p&gt;Other than the PATH itself, the real fun stuff is in customising how your terminal looks. You can see docs for the full available options &lt;a href="https://code.visualstudio.com/docs/terminal/profiles#_configuring-profiles" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Google Cloud SDK": {
            "icon": "cloud",
            "overrideName": "Google Cloud SDK",
            "color": "terminal.ansiYellow"
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Colours
&lt;/h3&gt;

&lt;p&gt;To change the colours, just start typing as VS Code has intellisense for the available colour options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9asn4zoqhkqeuhjhqc48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9asn4zoqhkqeuhjhqc48.png" alt="Colour Options" width="797" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Name
&lt;/h3&gt;

&lt;p&gt;Setting the &lt;code&gt;overrideName&lt;/code&gt; attribute lets you customise the displayed name of your integrated terminal in the UI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Icon
&lt;/h3&gt;

&lt;p&gt;Choosing the icon is perhaps the most fun part! I struggled to find a complete list of all available icons, so the best way to browse is by visiting any other "Icon" related setting in the UI, and looking through the dropdown of options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadtsxcyzl5s1iijl6dmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadtsxcyzl5s1iijl6dmn.png" alt="Available Icons" width="502" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  All Together
&lt;/h1&gt;

&lt;p&gt;We now have a much brighter &amp;amp; more colourful terminal experience! And if you're like me, you'll be much less prone to clicking on the wrong one by accident.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwo98mznomjq7cw4rwtct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwo98mznomjq7cw4rwtct.png" alt="All Together" width="406" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My favourite use case of this is to have multiple of the same terminal "type" but styled differently, so in my day-to-day workflow I can keep certain tasks separated and easily distinguish them.&lt;/p&gt;
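&lt;p&gt;As a sketch, that just means duplicating a profile under a new key with different styling (the &lt;code&gt;rocket&lt;/code&gt; icon and "Deploys" naming here are only examples):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Git Bash (Deploys)": {
    "path": "C:\\Program Files\\Git\\bin\\bash.exe",
    "icon": "rocket",
    "overrideName": "Deploys",
    "color": "terminal.ansiRed"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;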

&lt;p&gt;Happy VS Code customising!&lt;/p&gt;

</description>
      <category>vscode</category>
      <category>cli</category>
      <category>themes</category>
    </item>
    <item>
      <title>How to Set Up a Static Backend IP for Cloud Run Revision using VPC Connector</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Tue, 01 Jul 2025 06:46:43 +0000</pubDate>
      <link>https://forem.com/charlottetowell/how-to-set-up-a-static-backend-ip-for-cloud-run-revision-using-vpc-connector-104g</link>
      <guid>https://forem.com/charlottetowell/how-to-set-up-a-static-backend-ip-for-cloud-run-revision-using-vpc-connector-104g</guid>
      <description>&lt;p&gt;When deploying services on Cloud Run, the default behaviour is that the backend IP address (that is, where requests to external endpoints come from within your app), is assigned from a dynamic IP address pool.&lt;/p&gt;

&lt;p&gt;Therefore, for cases that require IP whitelisting, you need to configure the Cloud Run instance to use a static backend IP, which can be achieved through the &lt;em&gt;magic&lt;/em&gt;✨ (read: networking capabilities) of VPC Connector.&lt;/p&gt;

&lt;p&gt;Note that we are referring to the &lt;strong&gt;outbound&lt;/strong&gt; IP here, not the &lt;strong&gt;inbound&lt;/strong&gt; IP which instead is how traffic gets &lt;strong&gt;to&lt;/strong&gt; our Cloud Run instance and can be configured via a load balancer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Check out the Google Cloud docs &lt;a href="https://cloud.google.com/run/docs/configuring/static-outbound-ip" rel="noopener noreferrer"&gt;here&lt;/a&gt; for static outbound IP addresses&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbyf1sv6ztjxg1i9mngd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbyf1sv6ztjxg1i9mngd5.png" alt="CloudArchitectureDiagramWithVPCConnector" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Configure a Static Outbound IP?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Create a Router
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;gcloud compute routers create my-router --network=default --region=my-region&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Creating router [my-router]...done.
NAME       REGION     NETWORK
my-router  my-region  default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Reserve a Static IP
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;gcloud compute addresses create my-ip --region=my-region&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Created [https://www.googleapis.com/compute/v1/projects/my-project/regions/my-region/addresses/my-ip].
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Optional Step: View Existing Subnets
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;gcloud compute networks subnets list --network=default --filter="region:(my-region)"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME           REGION                NETWORK  RANGE          STACK_TYPE  IPV6_ACCESS_TYPE  INTERNAL_IPV6_PREFIX  EXTERNAL_IPV6_PREFIX
default        my-region  default  0.0.0.0/00  IPV4_ONLY
my-other-subnet my-region  default  0.0.0.0/00  IPV4_ONLY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In reality, your existing subnets will have actual IP ranges. Take note of them when choosing your new range so that it does not overlap any existing one.&lt;/p&gt;
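&lt;p&gt;To sanity-check a candidate range against your existing subnets before creating it, Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module can do the overlap check locally (a small sketch; the ranges shown are hypothetical placeholders):&lt;/p&gt;

```python
import ipaddress

# Hypothetical placeholders: substitute the RANGE values from the subnet listing
existing = [ipaddress.ip_network("10.128.0.0/20"),
            ipaddress.ip_network("10.132.0.0/20")]
candidate = ipaddress.ip_network("10.0.0.0/24")

# overlaps() is True if the two networks share any addresses
clashes = [net for net in existing if net.overlaps(candidate)]
if clashes:
    print(f"{candidate} overlaps existing subnet(s): {clashes}")
else:
    print(f"{candidate} is safe to use")
```

&lt;p&gt;This also catches the subtler case where the new range is a subset or superset of an existing one, not just exactly equal to it.&lt;/p&gt;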

&lt;h3&gt;
  
  
  Step 3: Create a new Subnet
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;gcloud compute networks subnets create my-subnet --network=default --range=10.0.0.0/24 --region=my-region&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Created [https://www.googleapis.com/compute/v1/projects/my-project/regions/my-region/subnetworks/my-subnet].
NAME       REGION     NETWORK  RANGE        STACK_TYPE  IPV6_ACCESS_TYPE  INTERNAL_IPV6_PREFIX  EXTERNAL_IPV6_PREFIX
my-subnet  my-region  default  10.0.0.0/24  IPV4_ONLY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Create a Cloud NAT Gateway
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute routers nats create my-nat \
--router=my-router \
--region=my-region \
--nat-custom-subnet-ip-ranges=my-subnet \
--nat-external-ip-pool=my-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the names you configured in the previous steps here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Creating NAT [my-nat] in router [my-router]...done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Set the Networking on your Cloud Run Revision
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70c07i8j4oibbf9m1w8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70c07i8j4oibbf9m1w8b.png" alt="CloudRunNetworkingConfiguration" width="800" height="638"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important - if it's not working, confirm the revision is set to route &lt;strong&gt;all&lt;/strong&gt; traffic to the VPC, not just requests to &lt;strong&gt;private&lt;/strong&gt; IPs. Routing only private traffic covers communication between Google services, e.g. reaching Cloud SQL over a static IP from the Cloud Run revisions serving your API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step 6: See the Static Outbound IP from Cloud NAT
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr971onu6vj49xkfegm0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr971onu6vj49xkfegm0b.png" alt="CloudNATIP" width="800" height="673"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;And all done! To verify everything is working as intended, you can make a request to a service such as &lt;code&gt;GET https://api.ipify.org?format=json&lt;/code&gt; from &lt;strong&gt;within&lt;/strong&gt; your Cloud Run application and check that the returned IP matches your reserved static IP.&lt;/p&gt;
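&lt;p&gt;That check can be scripted from inside the container with just the standard library (a sketch; ipify is a third-party echo service used here purely for illustration):&lt;/p&gt;

```python
import json
import urllib.request

def parse_ip(payload: str) -> str:
    """Extract the 'ip' field from an ipify-style JSON response."""
    return json.loads(payload)["ip"]

def current_outbound_ip() -> str:
    # Run from within the Cloud Run container; the result should match
    # the reserved static IP shown in the Cloud NAT configuration
    with urllib.request.urlopen("https://api.ipify.org?format=json", timeout=10) as resp:
        return parse_ip(resp.read().decode("utf-8"))
```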

</description>
      <category>googlecloud</category>
      <category>networking</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Using Pub/Sub to Build a Serverless Async Processing Pipeline on GCP</title>
      <dc:creator>Charlotte Towell</dc:creator>
      <pubDate>Tue, 20 May 2025 02:04:01 +0000</pubDate>
      <link>https://forem.com/charlottetowell/using-pubsub-to-build-a-serverless-async-processing-pipeline-on-gcp-406p</link>
      <guid>https://forem.com/charlottetowell/using-pubsub-to-build-a-serverless-async-processing-pipeline-on-gcp-406p</guid>
<description>&lt;p&gt;Google Cloud Run functions seem like the godsend of serverless computing until you hit the many limitations, all ultimately related to runtime. I'm talking about the max timeout of functions themselves (&lt;a href="https://cloud.google.com/functions/docs/configuring/timeout" rel="noopener noreferrer"&gt;3600s&lt;/a&gt;), or of other products such as API Gateway (&lt;a href="https://cloud.google.com/endpoints/docs/openapi/openapi-extensions#deadline" rel="noopener noreferrer"&gt;300s&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;This is the problem I faced at work where we had a long-running computation process that fires off as a result of an API call from a user. Sure, we could set the max deadline to 1 hour and have the user wait for 60 mins just to get a response, &lt;em&gt;not&lt;/em&gt;. Plus, as I mentioned, API Gateway caps it at 5 minutes anyway.&lt;/p&gt;

&lt;p&gt;This is an obvious use case for asynchronous processing with the ability to start a process, return a response, and keep it running...&lt;/p&gt;

&lt;p&gt;... which is something not supported inherently in a serverless function which ceases running once returning a response.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pub/Sub to the rescue
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fvgatw4v073d166adkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fvgatw4v073d166adkq.png" alt="Pubsub Superhero Meme" width="800" height="427"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;My solution: pub/sub messaging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Constrained to our serverless setup, one thing I've learnt about Google Cloud is that the limitations of one GCP product can generally be solved by combining it with another.&lt;/p&gt;

&lt;p&gt;Thus, my multi-step solution: a fully serverless, async processing pipeline with status updates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06lavk4q1n4jhieg78db.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06lavk4q1n4jhieg78db.png" alt="Async Process Flow Diagram" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow is as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Client calls an API to start the job&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This API generates a unique ID&lt;/li&gt;
&lt;li&gt;Pubsub message is published with this unique ID, and any other attributes&lt;/li&gt;
&lt;li&gt;Return the unique ID to the client
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def publish_to_pubsub(data, attributes=None):
    publisher = pubsub_v1.PublisherClient()
    topic_name = 'projects/myproject/topics/async-messages'
    #convert to bytes
    data_bytes = json.dumps(data).encode('utf-8')
    #publish to topic
    future = publisher.publish(topic_name, data=data_bytes, **attributes)
    future.result() #wait for msg to be published
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
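&lt;p&gt;Putting step 1 together, the API handler only needs to mint an ID, publish, and return it (a sketch; the handler shape is hypothetical, and the &lt;code&gt;publish_to_pubsub&lt;/code&gt; call is commented out so the sketch runs without GCP credentials):&lt;/p&gt;

```python
import uuid

def start_job(request_payload: dict) -> dict:
    """Hypothetical handler for the job-start endpoint."""
    job_id = str(uuid.uuid4())  # the unique ID the client will poll with
    message = {"job_id": job_id, **request_payload}
    # publish_to_pubsub(message, attributes={"workflow": "Example Workflow"})
    return {"job_id": job_id, "status": "started"}
```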


&lt;p&gt;&lt;strong&gt;2. The pubsub topic has push subscribers, optionally with attribute filters. I use this to re-use the same topic for different async processes.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Receive the message and run a subscriber cloud function&lt;/li&gt;
&lt;li&gt;Check whether the long-running process has already started, based on the provided ID&lt;/li&gt;
&lt;li&gt;If not started, make an HTTPS request to start the long-running cloud function, but don't wait for a response&lt;/li&gt;
&lt;li&gt;Update the database with our ID to mark that the function has started&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud pubsub subscriptions create EXAMPLE_WORKFLOW \
--topic=async-messages \
--push-endpoint=https://example-workflow-cloud-function.a.run.app \
--ack-deadline=10 \
--push-auth-service-account=pubsub-push-sa@myproject.iam.gserviceaccount.com \
--message-filter="attributes.workflow=\"Example Workflow\"" \
--dead-letter-topic=dead-letter-topic \
--max-delivery-attempts=5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from multiprocessing import Process

def send_request(url, body):
    authenticated_post_request(url=url, json_payload=body)

process = Process(target=send_request, args=(url, body))
process.start()

process.join(5) #cancel process
if process.is_alive():
   process.terminate()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;At this moment, our long-running process has started, essentially on its own in the wide galactic space of Google's servers with no client to receive its response 🌌&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;3. Long running function does our actual processing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is really just another cloud function now, but our only time constraint is the timeout of cloud functions themselves, as there is no client listening for a response&lt;/li&gt;
&lt;li&gt;If for whatever reason you need a longer timeout (maybe assess why you're using serverless 🤔), you could chain the process of starting and abandoning cloud functions, so long as you continue to pass the ID and any required data&lt;/li&gt;
&lt;li&gt;Periodically update the status of the process to provide incremental updates, via the database

&lt;ul&gt;
&lt;li&gt;At this point, you could build more robust re-try handling by adding deadlines for certain status changes &amp;amp; resumability&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. We have another API endpoint to GET the current status.&lt;/strong&gt;&lt;br&gt;
This is just a database query based on the ID we returned to the client&lt;/p&gt;
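&lt;p&gt;The status endpoint is then a single lookup keyed on that ID (a sketch; SQLite stands in here for whatever database you actually use):&lt;/p&gt;

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id TEXT PRIMARY KEY, status TEXT)")
db.execute("INSERT INTO jobs VALUES ('123e4567', 'RUNNING')")  # hypothetical row

def get_status(job_id: str) -> str:
    """Return the job's latest status, as written by the long-running function."""
    row = db.execute("SELECT status FROM jobs WHERE id = ?", (job_id,)).fetchone()
    return row[0] if row else "NOT_FOUND"
```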

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hsdgx27dx15oonoe9j7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hsdgx27dx15oonoe9j7.gif" alt="UI Job Status Check GIF" width="402" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Idempotency ?
&lt;/h3&gt;

&lt;p&gt;By using pub/sub, a key requirement is to ensure idempotency, as messages may be delivered more than once. This is why the &lt;strong&gt;subscriber&lt;/strong&gt; function must update the database to indicate the job has started.&lt;/p&gt;
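&lt;p&gt;One minimal way to get that idempotency is an atomic insert that only succeeds on the first delivery of a given job ID (a sketch; SQLite stands in for the real database):&lt;/p&gt;

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id TEXT PRIMARY KEY, status TEXT)")

def try_claim(job_id: str) -> bool:
    """Return True only on the first delivery of this job ID."""
    cur = db.execute("INSERT OR IGNORE INTO jobs VALUES (?, 'STARTED')", (job_id,))
    db.commit()
    return cur.rowcount == 1  # rowcount is 0 when the row already existed

# The subscriber only kicks off the long-running function when try_claim(...) is True
```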

&lt;h3&gt;
  
  
  Retries ?
&lt;/h3&gt;

&lt;p&gt;Pub/sub also has retry-with-backoff functionality. A downside of this set-up is that our long-running function does not benefit from it, as it is only called once from the subscriber. However, any logic we put in the subscriber function can throw an error, resulting in an unacked message that pub/sub will redeliver.&lt;/p&gt;

&lt;h3&gt;
  
  
  So in summary,
&lt;/h3&gt;

&lt;p&gt;Could we not have just started the long-running function from the initial API call?&lt;/p&gt;

&lt;p&gt;Yes of course.&lt;/p&gt;

&lt;p&gt;The reason I opted to use pub-sub is primarily to allow multiple subscriber functions to fire from messages. By using &lt;a href="https://cloud.google.com/pubsub/docs/subscription-message-filter" rel="noopener noreferrer"&gt;subscription message filters&lt;/a&gt;, you can split what would have been a long-running process into several smaller ones to run in parallel.&lt;/p&gt;

&lt;p&gt;After all, with serverless, we want to scale horizontally first wherever possible :)&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>googlecloud</category>
      <category>cloud</category>
      <category>python</category>
    </item>
  </channel>
</rss>
