<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Andrea Liliana Griffiths</title>
    <description>The latest articles on Forem by Andrea Liliana Griffiths (@andreagriffiths11).</description>
    <link>https://forem.com/andreagriffiths11</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F244800%2Fcc8eddc0-f6ec-4ed0-85c9-9dd91204bc57.jpg</url>
      <title>Forem: Andrea Liliana Griffiths</title>
      <link>https://forem.com/andreagriffiths11</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/andreagriffiths11"/>
    <language>en</language>
    <item>
      <title>GitHub Actions: The Stuff Nobody Tells You</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Sun, 05 Apr 2026 12:39:48 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/github-actions-the-stuff-nobody-tells-you-19md</link>
      <guid>https://forem.com/andreagriffiths11/github-actions-the-stuff-nobody-tells-you-19md</guid>
      <description>&lt;h1&gt;
  
  
  GitHub Actions: The Stuff Nobody Tells You
&lt;/h1&gt;

&lt;p&gt;I work at GitHub. I use Actions every day. I've also debugged YAML at 2am, watched the log viewer eat my browser, and pushed a bunch of commits in a row just to figure out why a conditional wasn't evaluating correctly.&lt;/p&gt;

&lt;p&gt;I'm not here to tell you Actions is perfect. It's not. The log viewer has made grown engineers question their career choices. The YAML expression syntax has a learning curve that feels more like a learning cliff. The push-wait-fail-repeat debugging loop can turn a five-minute fix into an afternoon-long hostage situation.&lt;/p&gt;

&lt;p&gt;I know this because I've lived it. And I've watched thousands of developers live it too.&lt;/p&gt;

&lt;p&gt;But here's what I've also seen: most of the pain comes from patterns that are avoidable. Not all of it. Some of it is the platform catching up. But a lot of it is stuff that has solutions right now that people haven't discovered yet because the easy path is to just keep copy-pasting YAML and suffering.&lt;/p&gt;

&lt;p&gt;This article is the stuff I wish someone had told me on day one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stop Using the Log Viewer
&lt;/h2&gt;

&lt;p&gt;I'm serious. The web UI for reading build logs is the single biggest source of frustration with Actions, and the fastest fix is to stop using it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh run view &lt;span class="nt"&gt;--log-failed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Failed step, in your terminal, instantly. No clicking through three pages. No waiting for the browser to decide whether it wants to render today. No back button roulette.&lt;/p&gt;

&lt;p&gt;If you want the full log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh run view &lt;span class="nt"&gt;--log&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to watch a run in real time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh run watch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CLI is faster, searchable with grep, and doesn't crash when your test suite outputs 50,000 lines. If you're still clicking through the web UI to read build logs in 2026, this is your sign to stop.&lt;/p&gt;
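&lt;p&gt;Because the output is plain text on stdout, it pipes straight into tools you already have. For instance (the run ID and search patterns here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Pull just the lines you care about out of the failed step
gh run view --log-failed | grep -i "error"

# Or search the full log of a specific run by its ID
gh run view 1234567890 --log | grep "FAIL"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;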

&lt;h2&gt;
  
  
  The YAML Problem Is Real. Here's How to Shrink It.
&lt;/h2&gt;

&lt;p&gt;Every CI system ends up as "a bunch of YAML." Actions is no exception. But there's a difference between a 40-line workflow that does one thing clearly and a 400-line monster with nested conditionals, matrix strategies, and inline bash scripts that would make a shell programmer cry.&lt;/p&gt;

&lt;p&gt;The 400-line monster happens because people don't know about the two features designed to prevent it. Or they don't use them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reusable Workflows
&lt;/h3&gt;

&lt;p&gt;If you have the same CI steps across multiple repos, you're probably copy-pasting workflows. Stop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/ci.yml&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;your-org/.github/.github/workflows/build.yml@main&lt;/span&gt;  &lt;span class="c1"&gt;# @main is fine for internal repos you control — but if you SHA-pin everything else, consider pinning this too&lt;/span&gt;
    &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;20'&lt;/span&gt;
    &lt;span class="na"&gt;secrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;inherit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One workflow, maintained in one place, called from everywhere. When you update it, every repo that uses it gets the update. No drift. No "wait, which repo has the latest version of our deploy script?"&lt;/p&gt;
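&lt;p&gt;The other half of the pattern is the workflow being called, which declares a &lt;code&gt;workflow_call&lt;/code&gt; trigger and its inputs. A minimal sketch of what that &lt;code&gt;build.yml&lt;/code&gt; could look like (the steps are illustrative, not a real file from this article):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# your-org/.github/.github/workflows/build.yml
on:
  workflow_call:
    inputs:
      node-version:
        type: string
        required: false
        default: '20'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd  # v6.0.2
      - uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f  # v6.3.0
        with:
          node-version: ${{ inputs.node-version }}
      - run: npm ci
      - run: npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With &lt;code&gt;secrets: inherit&lt;/code&gt; on the caller side, the called workflow sees the caller's secrets without redeclaring each one.&lt;/p&gt;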

&lt;h3&gt;
  
  
  Composite Actions
&lt;/h3&gt;

&lt;p&gt;Reusable workflows are for whole pipelines. Composite actions are for steps, the building blocks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/actions/setup-project/action.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Setup&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Project'&lt;/span&gt;
&lt;span class="na"&gt;runs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;using&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;composite'&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f&lt;/span&gt; &lt;span class="c1"&gt;# v6.3.0&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ inputs.node-version }}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm ci&lt;/span&gt;
      &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run build&lt;/span&gt;
      &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your workflow files read like sentences, not telenovelas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd&lt;/span&gt; &lt;span class="c1"&gt;# v6.0.2&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./.github/actions/setup-project&lt;/span&gt;
    &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;20'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The YAML is still there. But it's 12 lines, not 120. And when setup changes, you change it in one place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Break the Push-Wait-Fail Loop
&lt;/h2&gt;

&lt;p&gt;The most soul-crushing part of Actions debugging: you make a one-character change to a workflow file, push it, wait four minutes for a runner to spin up, and find out you missed a quote. A bunch of commits later, your git history looks like a cry for help.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nektos/act" rel="noopener noreferrer"&gt;&lt;code&gt;act&lt;/code&gt;&lt;/a&gt;, a community-maintained tool, runs your workflows locally in Docker containers. Same environment, no push required.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;act &lt;span class="nt"&gt;-j&lt;/span&gt; build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's not a perfect replica. Some GitHub-specific contexts don't exist locally. But for "did I break the YAML" and "does my bash script actually work," it cuts that feedback loop from minutes to seconds.&lt;/p&gt;
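&lt;p&gt;A couple of invocations worth knowing (flag names are &lt;code&gt;act&lt;/code&gt;'s, current as of recent releases — check &lt;code&gt;act --help&lt;/code&gt; for your version; the job names are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Simulate a pull_request event instead of the default push
act pull_request -j test

# Provide the secrets your workflow expects
act -j build --secret-file .secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;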

&lt;p&gt;For a quick sanity check before you push anything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh workflow view ci.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the YAML has obvious issues, it'll surface them quickly. It's not a full linter, but it catches the basics without a push cycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Marketplace Trust Problem (and What to Do About It)
&lt;/h2&gt;

&lt;p&gt;Every &lt;code&gt;uses: some-stranger/cool-action@v2&lt;/code&gt; is code you didn't write running with access to your repo and secrets. That's a real security concern, and "just pin to a SHA" is the right answer that nobody follows.&lt;/p&gt;

&lt;p&gt;Here's what actually works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pin to SHAs and let Dependabot manage updates:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd&lt;/span&gt; &lt;span class="c1"&gt;# v6.0.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the version in a comment so humans can read it. Dependabot will open PRs when new versions drop, and you can review the diff before updating. &lt;em&gt;(SHAs in this article were verified on March 30, 2026.)&lt;/em&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Stick to GitHub-maintained actions when possible. &lt;code&gt;actions/checkout&lt;/code&gt;, &lt;code&gt;actions/setup-node&lt;/code&gt;, &lt;code&gt;actions/cache&lt;/code&gt; — these are maintained by the same team that builds the platform. They're audited, tested, and updated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For everything else, read the source. It's open source. If a marketplace action is a 20-line shell script wrapped in a Dockerfile, maybe just copy the 20 lines into a &lt;code&gt;run:&lt;/code&gt; step and own it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use OpenSSF Scorecard to assess action maintainer practices before adopting.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
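&lt;p&gt;SHA pinning only pays off if something actually opens those update PRs. Dependabot's actions updater takes a few lines of config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Dependabot updates the pinned SHA and keeps the version comment in sync, so the human-readable hint stays accurate.&lt;/p&gt;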

&lt;h2&gt;
  
  
  Conditional Logic Without Losing Your Mind
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;${{ }}&lt;/code&gt; expression syntax is one of those things that's simple until it isn't, and then it's baffling. The edge cases around string interpolation, truthiness, and type coercion have bitten everyone at least once.&lt;/p&gt;

&lt;p&gt;A few survival rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Always quote expressions in `if:`&lt;/span&gt;
&lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event_name == 'push' }}&lt;/span&gt;

&lt;span class="c1"&gt;# Use fromJSON for booleans from inputs&lt;/span&gt;
&lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ fromJSON(inputs.deploy) == &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt; }}&lt;/span&gt;

&lt;span class="c1"&gt;# Multi-condition? Use &amp;gt;- to fold newlines into spaces.&lt;/span&gt;
&lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;-&lt;/span&gt;
  &lt;span class="s"&gt;${{ github.ref == 'refs/heads/main' &amp;amp;&amp;amp;&lt;/span&gt;
      &lt;span class="s"&gt;github.event_name == 'push' }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if your conditional logic is getting complex enough to need a flowchart, that's a sign you need a &lt;a href="https://docs.github.com/en/actions/sharing-automations/reusing-workflows" rel="noopener noreferrer"&gt;reusable workflow with inputs&lt;/a&gt;, not more &lt;code&gt;if:&lt;/code&gt; statements.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://docs.github.com/en/actions/reference/workflows-and-actions/expressions#case" rel="noopener noreferrer"&gt;&lt;code&gt;case()&lt;/code&gt; function&lt;/a&gt; is a recent addition worth knowing about. Think of it as a switch statement for expressions. It replaces nested ternaries that nobody can read.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let Copilot Write the YAML
&lt;/h2&gt;

&lt;p&gt;I'm not going to pretend the YAML is fun to write. But in 2026, you don't have to write most of it.&lt;/p&gt;

&lt;p&gt;In VS Code with GitHub Copilot, describe what you want in a comment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Deploy to production on push to main, run tests first,&lt;/span&gt;
&lt;span class="c1"&gt;# cache node_modules, notify Slack on failure&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copilot generates the workflow. You review and adjust. It handles the syntax, the &lt;code&gt;on:&lt;/code&gt; triggers, the &lt;code&gt;runs-on:&lt;/code&gt;, the step ordering, so you focus on what the pipeline should &lt;em&gt;do&lt;/em&gt;, not on remembering whether &lt;code&gt;environment&lt;/code&gt; goes inside or outside &lt;code&gt;jobs:&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For existing workflows, Copilot agent mode can refactor a 400-line YAML file into reusable workflows and composite actions. Tell it what you want, review the PR.&lt;/p&gt;

&lt;p&gt;This doesn't fix Actions. It fixes you having to wrestle with it directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caching: The Free Speed You're Probably Not Using
&lt;/h2&gt;

&lt;p&gt;If your workflow installs dependencies on every run and you haven't set up caching, you're leaving free minutes on the table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f&lt;/span&gt; &lt;span class="c1"&gt;# v6.3.0&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;20'&lt;/span&gt;
    &lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;npm'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That one line — &lt;code&gt;cache: 'npm'&lt;/code&gt; — tells &lt;code&gt;setup-node&lt;/code&gt; to cache npm's download cache (not &lt;code&gt;node_modules&lt;/code&gt; itself) between runs. First run is normal. Every run after that, &lt;code&gt;npm ci&lt;/code&gt; pulls packages from the cache instead of re-downloading them from the registry. For large projects, this can cut two to four minutes off every build.&lt;/p&gt;

&lt;p&gt;The same pattern works for other ecosystems:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Python&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405&lt;/span&gt; &lt;span class="c1"&gt;# v6.2.0&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;python-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.12'&lt;/span&gt;
    &lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pip'&lt;/span&gt;

&lt;span class="c1"&gt;# Go (manual cache)&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7&lt;/span&gt; &lt;span class="c1"&gt;# v5.0.4&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/go/pkg/mod&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need more control, &lt;code&gt;actions/cache&lt;/code&gt; lets you cache anything — build artifacts, Docker layers, compiled binaries. The key is hashing your lockfile so the cache invalidates when dependencies actually change.&lt;/p&gt;
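&lt;p&gt;One refinement worth knowing on the manual route: &lt;code&gt;restore-keys&lt;/code&gt; gives you a prefix fallback, so a lockfile change starts from the most recent cache instead of from scratch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7  # v5.0.4
  with:
    path: ~/go/pkg/mod
    key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
    # Fall back to the newest cache with this prefix when the exact key misses
    restore-keys: |
      ${{ runner.os }}-go-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;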

&lt;h2&gt;
  
  
  Stop Running the Same Workflow Twice
&lt;/h2&gt;

&lt;p&gt;If your team pushes fast, you've probably seen this: three commits in ten minutes, three workflow runs queued, the first two are already stale by the time they finish.&lt;/p&gt;

&lt;p&gt;Concurrency groups fix this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;concurrency&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.workflow }}-${{ github.ref }}&lt;/span&gt;
  &lt;span class="na"&gt;cancel-in-progress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add this at the top level of your workflow. Same branch, same workflow — the old run gets cancelled when a new one starts. No more wasting runner minutes on commits that are already superseded.&lt;/p&gt;

&lt;p&gt;This is especially useful for PR workflows where developers are iterating quickly. Without it, you're paying for builds nobody will look at.&lt;/p&gt;
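&lt;p&gt;One caveat: you usually don't want to cancel an in-flight deploy on &lt;code&gt;main&lt;/code&gt;. Since &lt;code&gt;cancel-in-progress&lt;/code&gt; accepts an expression, a common pattern is to cancel only PR runs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  # Cancel superseded runs for PRs, but let pushes to branches run to completion
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;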

&lt;h2&gt;
  
  
  What's Actually Getting Better
&lt;/h2&gt;

&lt;p&gt;I said I wouldn't sugarcoat this, so let me be specific about what's improved and what still needs work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Better now:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Job summaries: structured output instead of just log lines&lt;/li&gt;
&lt;li&gt;Required workflow enforcement via rulesets: organization-wide policy without copy-paste&lt;/li&gt;
&lt;li&gt;Larger runners: 64-core machines available on paid plans (Team/Enterprise), and ARM runners in GA for public repos&lt;/li&gt;
&lt;li&gt;Immutable actions: actions published to GHCR with provenance&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;case()&lt;/code&gt; and other recent expression improvements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Still needs work:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The log viewer. It's improved, but it's not where it needs to be for large builds. Use the CLI.&lt;/li&gt;
&lt;li&gt;Debugging experience. &lt;code&gt;act&lt;/code&gt; helps, but first-party local execution would change everything.&lt;/li&gt;
&lt;li&gt;The learning curve for expressions. The docs team has been steadily improving this, and it shows — but there's still room to grow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm not writing this to defend GitHub's honor. I'm writing it because I've watched too many teams suffer through problems that have solutions, and those solutions aren't reaching people fast enough.&lt;/p&gt;

&lt;p&gt;The fundamentals are there. The platform works. But "works" and "pleasant" are different things, and the gap between them is where your afternoon disappears. These patterns close that gap. Not all the way. But enough.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;🆕 Update:&lt;/strong&gt; I turned the patterns from this article into a reusable Copilot skill — a markdown file your AI coding agent can reference when writing workflows. It covers SHA pinning, matrix strategies, artifacts, permissions, timeouts, and the filesystem gotchas that eat your afternoon. Still testing it, but you can try it now: &lt;a href="https://mainbranch.dev/skills/github-actions.md" rel="noopener noreferrer"&gt;GitHub Actions Skill&lt;/a&gt;&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>cicd</category>
      <category>developertools</category>
      <category>fundamentals</category>
    </item>
    <item>
      <title>GitHub Copilot CLI Enhanced with Chronicle</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Sat, 28 Mar 2026 03:08:26 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/github-copilot-cli-enhanced-with-chronicle-27fk</link>
      <guid>https://forem.com/andreagriffiths11/github-copilot-cli-enhanced-with-chronicle-27fk</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;https://github.com/features/copilot&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cli</category>
      <category>github</category>
      <category>productivity</category>
    </item>
    <item>
      <title>OpenClaw Just Passed React. Here's What the GitHub Star Leaderboard Actually Looks Like</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Thu, 26 Mar 2026 22:13:47 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-1118</link>
      <guid>https://forem.com/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-1118</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3d5g"&gt;https://dev.to/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3d5g&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>OpenClaw Just Passed React. GitHub Star Leaderboard Update.</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Thu, 26 Mar 2026 22:09:06 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/openclaw-just-passed-react-github-star-leaderboard-update-1k1g</link>
      <guid>https://forem.com/andreagriffiths11/openclaw-just-passed-react-github-star-leaderboard-update-1k1g</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3d5g"&gt;https://dev.to/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3d5g&lt;/a&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>news</category>
      <category>openclaw</category>
      <category>react</category>
    </item>
    <item>
      <title>OpenClaw Just Passed React. Here's What the GitHub Star Leaderboard Actually Looks Like.</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Thu, 26 Mar 2026 22:07:24 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3p7c</link>
      <guid>https://forem.com/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3p7c</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3d5g"&gt;https://dev.to/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3d5g&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>OpenClaw Just Passed React. Here's What the GitHub Star Leaderboard Actually Looks Like.</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Thu, 26 Mar 2026 22:01:44 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-4dam</link>
      <guid>https://forem.com/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-4dam</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3d5g"&gt;https://dev.to/andreagriffiths11/openclaw-just-passed-react-heres-what-the-github-star-leaderboard-actually-looks-like-3d5g&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Rapper Tweeted About GitHub, So I Built Him a Learn-to-Code Repo from My Phone</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Wed, 25 Mar 2026 14:14:07 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/a-rapper-tweeted-about-github-so-i-built-him-a-learn-to-code-repo-from-my-phone-31d6</link>
      <guid>https://forem.com/andreagriffiths11/a-rapper-tweeted-about-github-so-i-built-him-a-learn-to-code-repo-from-my-phone-31d6</guid>
      <description>&lt;h1&gt;
  
  
  A Rapper Tweeted About GitHub, So I Built Him a Learn-to-Code Repo from My Phone
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://x.com/MeekMill/status/2036280155098751004" rel="noopener noreferrer"&gt;Meek Mill tweeted about GitHub.&lt;/a&gt; It was the middle of the night. I was in bed, half asleep, scrolling on my phone.&lt;/p&gt;

&lt;p&gt;So I messaged my AI agent from my phone, told it to build a learn-to-code repo for Meek Mill styled like album tracks, and it created the repo on GitHub and assigned the task to Copilot coding agent. Copilot wrote the content, structured the tracks, and opened a PR. I reviewed and merged from my phone.&lt;/p&gt;

&lt;p&gt;I never opened a terminal. Never opened an IDE.&lt;/p&gt;

&lt;p&gt;The repo: &lt;a href="https://github.com/AndreaGriffiths11/dreamchasers" rel="noopener noreferrer"&gt;dreamchasers&lt;/a&gt;. Six tracks plus a bonus, album format, each one teaching a real GitHub skill. Track 1 gets you from zero to your first repo in 10 minutes. &lt;a href="https://x.com/acolombiadev/status/2036450986210841065" rel="noopener noreferrer"&gt;Here's the tweet.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A rapper with 10 million followers just put "GitHub" in front of people who have never heard of a pull request. Some kid scrolling past that tweet is going to think "wait, I can do that?" and the answer is yes.&lt;/p&gt;

&lt;p&gt;You don't need a CS degree. Don't need a bootcamp. Don't even need to leave your browser. Just &lt;a href="https://github.com/signup" rel="noopener noreferrer"&gt;a GitHub account&lt;/a&gt; and curiosity.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's New in Copilot CLI
&lt;/h2&gt;

&lt;p&gt;I wrote the &lt;a href="https://github.blog/ai-and-ml/github-copilot/github-copilot-cli-how-to-get-started/" rel="noopener noreferrer"&gt;official getting started guide&lt;/a&gt; back in October. A lot has changed since then.&lt;/p&gt;

&lt;h3&gt;
  
  
  It's Free
&lt;/h3&gt;

&lt;p&gt;Copilot CLI is included as a core feature of all GitHub Copilot plans: Free, Pro, Pro+, Business, and Enterprise. You get a free GitHub account, you get Copilot CLI. Each interaction uses premium requests from your plan's allowance (50/month on Free).&lt;/p&gt;

&lt;h3&gt;
  
  
  Three Modes: Ask, Plan, Autopilot
&lt;/h3&gt;

&lt;p&gt;Press Shift+Tab to cycle between modes:&lt;/p&gt;

&lt;p&gt;Ask/Execute mode is the default. You prompt, Copilot acts, you approve each step.&lt;/p&gt;

&lt;p&gt;Plan mode analyzes your request, asks clarifying questions, and builds a structured implementation plan before writing any code. Catches misunderstandings before they become bad commits.&lt;/p&gt;

&lt;p&gt;Autopilot mode carries the task forward without step-by-step approval. You outline the work, then let it run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Model Agnostic
&lt;/h3&gt;

&lt;p&gt;Use &lt;code&gt;/model&lt;/code&gt; to switch between Claude, GPT, Gemini, and more. Switch mid-session. Run &lt;code&gt;/fleet&lt;/code&gt; to execute across multiple models in parallel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Interactive and Programmatic
&lt;/h3&gt;

&lt;p&gt;Two interfaces:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interactive:&lt;/strong&gt; Run &lt;code&gt;copilot&lt;/code&gt; and have a conversation. Use &lt;code&gt;/plan&lt;/code&gt; to outline work, &lt;code&gt;/fleet&lt;/code&gt; to parallelize, &lt;code&gt;/delegate&lt;/code&gt; to hand off, &lt;code&gt;/chronicle&lt;/code&gt; to review your session history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Programmatic:&lt;/strong&gt; Pass a prompt directly with &lt;code&gt;copilot -p "your task here"&lt;/code&gt;. It runs, completes the task, exits. Good for scripts and automation.&lt;/p&gt;
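&lt;p&gt;A sketch of what that looks like in practice (the prompt is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Runs the task non-interactively and exits, so it composes with shell scripts
copilot -p "summarize the changes on this branch compared to main"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;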

&lt;h3&gt;
  
  
  Native GitHub Integration via MCP
&lt;/h3&gt;

&lt;p&gt;Copilot CLI works directly with your issues and pull requests through GitHub's native MCP server. Search issues, analyze labels, create branches, open PRs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Agents and Skills
&lt;/h3&gt;

&lt;p&gt;Use &lt;code&gt;AGENTS.md&lt;/code&gt; and Agent Skills to define custom instructions and tool access, so your agent works the same way every time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Copilot Coding Agent
&lt;/h2&gt;

&lt;p&gt;The coding agent is what I actually used to build the dreamchasers repo from my phone. It works autonomously in a GitHub Actions environment: you assign it an issue or describe a task, it explores your repo, writes the code, runs your tests and linters, and opens a pull request. You review it like any other PR.&lt;/p&gt;

&lt;p&gt;Copilot coding agent is available with GitHub Copilot Pro, Pro+, Business, and Enterprise plans.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Repo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3s5hvo32fcm2a6iwr1p0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3s5hvo32fcm2a6iwr1p0.gif" alt="dreamchasers ASCII art" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/AndreaGriffiths11/dreamchasers" rel="noopener noreferrer"&gt;dreamchasers&lt;/a&gt;. Six tracks plus a bonus, album format — from making your first repo to contributing to open source. Everything runs in the browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  Want to Try Copilot CLI?
&lt;/h2&gt;

&lt;p&gt;This is the tool that built the repo. It turns plain English into code, right from your terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install Copilot CLI&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @github/copilot

&lt;span class="c"&gt;# Launch it&lt;/span&gt;
copilot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Included in all Copilot plans, including Free. All you need is a GitHub account.&lt;/p&gt;

&lt;p&gt;If you want to go deeper, here's the &lt;a href="https://github.blog/ai-and-ml/github-copilot/github-copilot-cli-how-to-get-started/" rel="noopener noreferrer"&gt;official guide I wrote for the GitHub Blog&lt;/a&gt; and the &lt;a href="https://docs.github.com/en/copilot/concepts/agents/copilot-cli/about-copilot-cli" rel="noopener noreferrer"&gt;full documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Point
&lt;/h2&gt;

&lt;p&gt;Meek Mill getting into GitHub matters because someone who grew up on Dreams and Nightmares might see this and realize the tools are free, the community is open, and nobody is checking credentials at the door.&lt;/p&gt;

&lt;p&gt;Dream chasers ship code. 🎤&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>copilotcli</category>
      <category>copilotcodingagent</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Quantum Computing: The Compute Power Behind 'Curing Cancer'</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:45:09 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/quantum-computing-the-compute-power-behind-curing-cancer-l87</link>
      <guid>https://forem.com/andreagriffiths11/quantum-computing-the-compute-power-behind-curing-cancer-l87</guid>
      <description>&lt;h1&gt;
  
  
  Quantum Computing: The Compute Power Behind "Curing Cancer"
&lt;/h1&gt;

&lt;p&gt;A few weeks ago, my boss Cassidy posted &lt;a href="https://youtu.be/FC7YGG0FzZ0" rel="noopener noreferrer"&gt;a video about her feelings on AI&lt;/a&gt;. She called it "An attempt at a balanced perspective on AI" and described the process as repeatedly "crashing out" while working through her thoughts.&lt;/p&gt;

&lt;p&gt;I watched it. Then I left a comment: "Let's get AI to cure cancer first, then throw it in the ocean."&lt;/p&gt;

&lt;p&gt;I'm a breast cancer survivor. That experience rewrites your priorities. When people ask what I want from AI, I don't say better autocomplete. I say cure cancer. Then throw it in the ocean. But cure cancer first.&lt;/p&gt;

&lt;p&gt;That comment stuck with me. I started wondering what it would actually take. Not the hype. Not the TED talks. The actual compute. And that question led me somewhere unexpected: quantum computing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Quantum Computing Actually Is
&lt;/h2&gt;

&lt;p&gt;Let's clear something up. Quantum computers are not faster regular computers. They're fundamentally different machines that solve fundamentally different problems.&lt;/p&gt;

&lt;p&gt;Classical computers use bits. Each bit is either 0 or 1. You know this. It's how every computer you've ever used works.&lt;/p&gt;

&lt;p&gt;Quantum computers use qubits. A qubit can be 0, 1, or both at the same time through a property called superposition. When qubits interact through entanglement, they can represent and process vastly more information than the same number of classical bits. The math gets weird fast. But the intuition matters: quantum computers don't try every possibility sequentially. They explore probability spaces in ways that classical computers physically cannot replicate.&lt;/p&gt;

&lt;p&gt;This isn't theoretical anymore. &lt;a href="https://quantumai.google/" rel="noopener noreferrer"&gt;Google's Willow chip&lt;/a&gt; demonstrated a 13,000x speedup over the world's fastest supercomputer in October 2025 for a specific quantum algorithm. That's not a rounding error. That's a different category of computation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where We Are in 2026
&lt;/h2&gt;

&lt;p&gt;The quantum computing narrative has shifted from "maybe someday" to "which modality wins."&lt;/p&gt;

&lt;h3&gt;
  
  
  Caltech's 6,100-Qubit Array
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.caltech.edu/about/news/caltech-team-sets-record-with-6100-qubit-array" rel="noopener noreferrer"&gt;Caltech built a neutral-atom array with 6,100 qubits&lt;/a&gt; in September 2025, the largest controlled quantum system ever assembled. The previous record was around a thousand. These qubits maintained coherence for 13 seconds with 99.98% fidelity. Coherence time matters because it determines how much useful computation you can do before quantum effects collapse.&lt;/p&gt;

&lt;h3&gt;
  
  
  IBM's Nighthawk Processor
&lt;/h3&gt;

&lt;p&gt;Running at 120 qubits, Nighthawk &lt;a href="https://www.helpnetsecurity.com/2025/11/12/ibm-quantum-nighthawk-processor/" rel="noopener noreferrer"&gt;achieved a 10x speedup in quantum error correction decoding&lt;/a&gt;. Error correction is the bottleneck. Quantum states are fragile, and every interaction with the environment introduces noise. IBM's progress means we're getting better at keeping quantum information intact long enough to compute with it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quantinuum's H2 Processor
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.quantinuum.com/press-releases/quantinuum-and-microsoft-announce-new-era-in-quantum-computing-with-breakthrough-demonstration-of-reliable-qubits" rel="noopener noreferrer"&gt;Became the first quantum computer to reach Microsoft's Level 2 Resilient phase&lt;/a&gt;. They produced logical qubits with error rates 800x lower than physical rates using just 30 physical qubits to create four logical qubits.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Industry Crossed a Billion Dollars in Revenue
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://quantumcomputingreport.com/ionq-reports-on-its-q3-2025-financial-results-world-record-fidelity-achieved-and-record-cash-position/" rel="noopener noreferrer"&gt;IonQ&lt;/a&gt;, the largest publicly traded pure-play quantum company, hit $39.9M in Q3 2025 revenue, up 221% year-over-year. Governments have committed over $40 billion in national quantum strategies. These are not vaporware numbers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Drug Discovery Changes Everything
&lt;/h2&gt;

&lt;p&gt;Here's where cancer comes in.&lt;/p&gt;

&lt;p&gt;The pharmaceutical industry has a problem: most AI-designed drug molecules look promising on computers but are nearly impossible to synthesize in labs. This is called downstream attrition. You find a candidate that should work, spend millions developing it, then discover it can't actually be manufactured.&lt;/p&gt;

&lt;p&gt;Classical computers struggle with molecular simulation because the math of quantum mechanics scales exponentially with the number of electrons. Simulating a single caffeine molecule accurately requires classical computing resources that push against the limits of what's physically possible.&lt;/p&gt;

&lt;p&gt;Quantum computers are naturally suited for this. They don't simulate quantum mechanics. They &lt;em&gt;are&lt;/em&gt; quantum mechanics.&lt;/p&gt;

&lt;p&gt;In January 2026, &lt;a href="https://thequantuminsider.com/2026/01/12/researchers-report-quantum-computing-can-accelerate-drug-design/" rel="noopener noreferrer"&gt;PolarisQB released a head-to-head study&lt;/a&gt; showing their quantum annealing platform running on D-Wave hardware outperformed classical generative AI for drug discovery. Classical AI took 40 hours to suggest molecules. The quantum system took 30 minutes. More importantly, the quantum-generated leads were significantly easier to synthesize in the lab.&lt;/p&gt;

&lt;p&gt;In February 2026, &lt;a href="https://www.hpcwire.com/off-the-wire/telefonica-vithas-and-ufv-launch-quantum-computing-project-for-cancer-drug-design/" rel="noopener noreferrer"&gt;Telefónica, Vithas hospitals, and Francisco de Vitoria University launched a quantum computing project&lt;/a&gt; specifically for cancer drug design. They're targeting the BRAF V600E mutation, an altered protein that drives uncontrolled cancer cell growth. The goal is to use quantum algorithms to generate molecules that inhibit this protein. This is not a whitepaper. It's happening now.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compute Requirements
&lt;/h2&gt;

&lt;p&gt;I asked: what would it actually take to cure cancer with compute?&lt;/p&gt;

&lt;p&gt;The honest answer is we don't know yet. But we're getting real data points.&lt;/p&gt;

&lt;p&gt;The number of possible drug-like molecules exceeds 10^60. That's more than the number of atoms in the solar system. Classical computers cannot search this space exhaustively, so they use heuristics, approximations, and machine learning to guess at promising regions. Quantum computers approach this differently. For specific optimization problems, and drug discovery is fundamentally an optimization problem, they can explore solution spaces in ways classical algorithms cannot match.&lt;/p&gt;

&lt;p&gt;Researchers at Caltech published a quantum computing framework in February 2026 specifically for multi-stage drug discovery, covering allosteric site identification, protein-peptide docking, and molecular dynamics. It encodes three quantum algorithms that classical methods struggle to approximate.&lt;/p&gt;

&lt;p&gt;The scale needed for practical advantage keeps shrinking. Google's demonstration used their 105-qubit Willow chip. IBM's roadmap targets 200 logical qubits by 2028. Logical qubits are error-corrected qubits that can run reliable computations. Each logical qubit requires many physical qubits to maintain, but the ratios are improving fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hardware Reality
&lt;/h2&gt;

&lt;p&gt;Different companies are betting on different qubit technologies, and none have clearly won.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Superconducting Qubits (IBM, Google, Rigetti)&lt;/strong&gt;&lt;br&gt;
Offer fast gate operations but require millikelvin temperatures. IBM operates the largest cloud-accessible quantum fleet and Google achieved the landmark quantum advantage demonstration with this approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trapped Ions (IonQ, Quantinuum)&lt;/strong&gt;&lt;br&gt;
Offer higher fidelity and fully connected qubits at the cost of slower operations. IonQ holds the world record at 99.99% two-qubit gate fidelity and Quantinuum's H2 processor leads in quantum volume.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neutral Atoms (QuEra, Pasqal, Atom Computing)&lt;/strong&gt;&lt;br&gt;
Can pack thousands of qubits in arrays with longer coherence times. This is the technology behind Caltech's 6,100-qubit breakthrough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Photonic Computing (PsiQuantum, Xanadu)&lt;/strong&gt;&lt;br&gt;
Offers the potential for room-temperature operation, but reliable gates are harder to achieve. PsiQuantum raised over $2 billion and is building datacenter-scale quantum compute centers.&lt;/p&gt;

&lt;p&gt;All of these are viable for different applications. The race is still open.&lt;/p&gt;

&lt;h2&gt;
  
  
  A New Research Agenda
&lt;/h2&gt;

&lt;p&gt;I started this with a frustrated comment about AI hype. I ended it with a reading list of quantum physics papers and a new appreciation for how much work happens between the hype cycles.&lt;/p&gt;

&lt;p&gt;The quantum computing industry has moved from "trust us, it's coming" to "here are the benchmarks." The applications have narrowed from "everything" to specific domains where quantum mechanics gives inherent advantages. Drug discovery and materials science lead that list.&lt;/p&gt;

&lt;p&gt;Cancer isn't one disease. It's thousands of molecular malfunctions, each requiring an understanding of specific protein interactions, cellular pathways, and drug binding mechanisms. Classical compute hits walls. Quantum compute might help us climb over them.&lt;/p&gt;

&lt;p&gt;I'm not saying quantum computers will cure cancer. I'm saying the people who might cure cancer are starting to use them. That's not AI hype. That's a research direction worth following.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Andrea Griffiths is a Senior Developer Advocate at GitHub and a breast cancer survivor. She writes &lt;a href="https://mainbranch.dev" rel="noopener noreferrer"&gt;Main Branch&lt;/a&gt;, a newsletter about developer fundamentals.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.caltech.edu/about/news/caltech-team-sets-record-with-6100-qubit-array" rel="noopener noreferrer"&gt;Caltech's 6,100-Qubit Array Announcement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.hpcwire.com/off-the-wire/telefonica-vithas-and-ufv-launch-quantum-computing-project-for-cancer-drug-design/" rel="noopener noreferrer"&gt;Telefónica/Vithas Quantum Cancer Drug Project&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://quantumai.google/" rel="noopener noreferrer"&gt;Google Quantum AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://quantumzeitgeist.com/quantum-computing-companies-in-2026-2/" rel="noopener noreferrer"&gt;Quantum Computing Companies in 2026&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>quantumcomputing</category>
      <category>drugdiscovery</category>
      <category>cancerresearch</category>
      <category>googlewillow</category>
    </item>
    <item>
      <title>An Experiment in Voice: What Happens When AI Learns to Write Like You</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:44:10 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/an-experiment-in-voice-what-happens-when-ai-learns-to-write-like-you-36na</link>
      <guid>https://forem.com/andreagriffiths11/an-experiment-in-voice-what-happens-when-ai-learns-to-write-like-you-36na</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferbekl59x7cnpp64za5p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferbekl59x7cnpp64za5p.jpg" alt="An Experiment in Voice — Fine-tuning Qwen3 8B with Unsloth" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  An Experiment in Voice: What Happens When AI Learns to Write Like You
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Fine-tuning Qwen3 8B with Unsloth, and what it taught me about what voice actually is.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I fine-tuned a language model on three years of my own writing. Not because I wanted a clone spitting out newsletters while I slept. Not because I thought the world needed more content with my name on it. I did it because I got curious about something specific: what actually happens when you teach an AI system how to sound like a real person?&lt;/p&gt;

&lt;p&gt;Most fine-tuning goes like this. You grab a general model. Point it at a specific domain. Marketing copy, customer support, engineering docs. The model learns the patterns of that domain and gets better at sounding like it belongs there. But it doesn't capture perspective. It doesn't learn the actual choices a human makes when deciding how to explain something.&lt;/p&gt;

&lt;p&gt;I wanted to try something different. So I took Qwen3 8B and trained it on three years of how I actually talk to people about technical stuff. Not the polished version that lands on The GitHub Blog. The real stuff. How I'd onboard someone new to the team. When I'd decide to start with "here's why this exists" instead of jumping to "here's how to use it." The moment where I shift from talking about theory to talking about what actually works. The tone that makes someone feel like you understand what it's like to be new to this and not stupid.&lt;/p&gt;

&lt;p&gt;Newsletters. Conference talks. Conversations with developers learning GitHub for the first time. Issues where I had to explain something five different ways until it finally clicked. The unglamorous moments where communication actually happens.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Unsloth made this possible
&lt;/h2&gt;

&lt;p&gt;Here's the thing that made this experiment actually feasible: &lt;a href="https://github.com/unslothai/unsloth" rel="noopener noreferrer"&gt;Unsloth&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Fine-tuning a language model normally requires serious hardware. You're talking GPU clusters, thousands of dollars, infrastructure that most people don't have access to. Unsloth is a library that optimizes the fine-tuning process so aggressively that you can train on a free Colab T4 GPU. Sixteen gigs of VRAM. That's it.&lt;/p&gt;

&lt;p&gt;I started with Llama 3.1 8B but switched to Qwen3 8B. The base model matters more than anything you do after. Qwen3 is newer, trained on more data, and its instruction-following is significantly better out of the box. Same parameter count, better foundation. That switch alone improved the output more than any hyperparameter tweak I tried.&lt;/p&gt;

&lt;p&gt;Unsloth does this through something called LoRA, which stands for Low-Rank Adaptation. Instead of retraining the entire model's weights (which is expensive and slow), LoRA adds small adapter matrices to specific layers. The base model stays frozen. You're only training the adapters. Think of it like adjusting the instruments in an orchestra instead of replacing the musicians. The orchestra still plays all the same notes. You're just tuning how it sounds.&lt;/p&gt;

&lt;p&gt;I set LoRA rank to 16. That number controls how much capacity those adapters have. Higher rank means more flexibility, more VRAM, more training time. Lower rank means tighter constraints but faster convergence. Rank 16 was the sweet spot for eighty-one examples of my voice. I also set LoRA alpha to 32 (double the rank) so the adapter updates scale correctly during training.&lt;/p&gt;

&lt;p&gt;The target modules were the attention layers: q_proj, k_proj, v_proj, o_proj, plus the feed-forward layers (gate_proj, up_proj, down_proj). Those are the parts of the model that control how it processes and generates language. That's where voice lives.&lt;/p&gt;

&lt;p&gt;Training was sixty steps with a batch size of 2 and gradient accumulation across 4 batches. Learning rate at 1e-4, which is conservative for a small dataset. A higher learning rate would overshoot on eighty-one examples and corrupt the model's general knowledge. Lower and you're training forever.&lt;/p&gt;
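&lt;p&gt;Put together, the settings above look roughly like this in Unsloth's Python API. This is a configuration sketch, not my exact notebook; the model name string and the &lt;code&gt;dataset&lt;/code&gt; variable are assumptions:&lt;/p&gt;

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit (model name is an assumed Unsloth identifier)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters: rank 16, alpha 32, attention + feed-forward layers
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Sixty steps, batch size 2, gradient accumulation across 4 batches, lr 1e-4
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # the 81 examples, already formatted
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=1e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```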

&lt;p&gt;The data went through the ChatML template, which is Qwen's format: system prompt, user message, assistant response. Each example was formatted correctly so the model learns not just what to write but how to respond to a specific prompt structure.&lt;/p&gt;
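&lt;p&gt;A tiny sketch of what that template looks like when rendered. The system and user strings here are made up for illustration:&lt;/p&gt;

```python
# Render one training example in ChatML, the chat template Qwen uses:
# each turn is wrapped in im_start / im_end markers with a role name.
def to_chatml(system: str, user: str, assistant: str) -> str:
    return (
        f"&lt;|im_start|&gt;system\n{system}&lt;|im_end|&gt;\n"
        f"&lt;|im_start|&gt;user\n{user}&lt;|im_end|&gt;\n"
        f"&lt;|im_start|&gt;assistant\n{assistant}&lt;|im_end|&gt;\n"
    )

example = to_chatml(
    "You write technical explanations in Andrea's voice.",
    "Explain what a pull request is.",
    "A pull request is how you propose a change and invite review before it merges.",
)
print(example)
```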

&lt;p&gt;The export was quantized to q4_k_m. That's 4-bit quantization with K-means clustering. The model compresses from full precision (32-bit floats) down to 4-bit integers without much quality loss. Final size: about 5GB. Portable. Runs on a MacBook Pro with 16GB RAM. Local. No API calls. No privacy concerns.&lt;/p&gt;
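&lt;p&gt;The file size checks out with back-of-the-envelope math, assuming q4_k_m averages a little under five bits per weight (the exact figure varies by layer):&lt;/p&gt;

```python
params = 8.2e9            # Qwen3 8B parameter count, approximate
bits_per_weight = 4.85    # q4_k_m mixes 4- and 6-bit blocks, so the average is just under 5
size_gb = params * bits_per_weight / 8 / 1e9
print(round(size_gb, 1))  # in the neighborhood of 5 GB, matching the exported file
```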

&lt;p&gt;Without Unsloth, this experiment doesn't happen. You'd need a lab budget or months waiting for cloud infrastructure. Instead, I did it in a free Colab notebook in an afternoon and downloaded the trained model by evening. Total cost: zero dollars and an afternoon.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it captured and what it didn't
&lt;/h2&gt;

&lt;p&gt;The model learned how I write. Not how I think.&lt;/p&gt;

&lt;p&gt;When you give it a technical problem, it mirrors back my sentence structure. It breaks thoughts into separate sentences instead of chaining them together with corporate connectives. It asks what you're actually trying to do before jumping to answers. It flinches away from marketing language like an instinct. It opens with the problem instead of the feature.&lt;/p&gt;

&lt;p&gt;Those are communication choices. Patterns. Style. The model got really good at capturing them.&lt;/p&gt;

&lt;p&gt;What it didn't capture is the thing underneath that actually matters: the ability to change your mind.&lt;/p&gt;

&lt;p&gt;I used to be really convinced about certain stacks. Defended them hard. Built entire arguments around why they were the right choice. Then I changed my mind. I've done it with people too. Teams I thought were solid until they weren't. Technologies I was certain about until I saw something better and had to actually admit I was wrong about the first thing.&lt;/p&gt;

&lt;p&gt;That willingness to be wrong isn't in the training data. It can't be. The model learns from what you've already written. It doesn't learn that you might write something completely different tomorrow because you figured something out today that contradicts what you said yesterday.&lt;/p&gt;

&lt;p&gt;That's the actual gap between a system that learned your voice and a voice that belongs to a real person. You get to change. You get to contradict yourself. You get to look back at what you believed last year and go "yeah, nah, I was wrong about that."&lt;/p&gt;

&lt;p&gt;The model will always sound like 2026 Andrea. You won't be 2026 Andrea forever.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I'm telling you this
&lt;/h2&gt;

&lt;p&gt;Most companies are fine-tuning AI to sound professional. Polished. On-brand. Safe. But there's something more interesting you can actually do: teach an AI system to communicate the way a real technical authority communicates. Not to think like them. Just to sound like them.&lt;/p&gt;

&lt;p&gt;If you're building documentation, that means AI that matches how you actually talk about things, not some generic "professional" tone that makes everything sound corporate. If you're a developer advocate trying to scale your explanations without losing your actual voice, it means you can get a tool that captures how you really talk about problems. If you're trying to communicate technical ideas at scale without everything turning into corporate mush, teaching voice is a different kind of leverage entirely.&lt;/p&gt;

&lt;p&gt;The model didn't learn my perspective. It learned my patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you can actually do with this
&lt;/h2&gt;

&lt;p&gt;You could use this to write more content faster. Delegate some of the explanation work. Get a communication partner that sounds like you, rather than one that makes you sound like someone else.&lt;/p&gt;

&lt;p&gt;But I didn't build this to manufacture content. I built it to understand what voice actually is. To prove that you can teach an AI system to communicate like a real person. To create something that sounds like a technical authority instead of a corporate algorithm.&lt;/p&gt;

&lt;p&gt;The result is useful if you're trying to scale thoughtful explanation. If you need documentation that doesn't sound generic. If you want a partner that talks through problems the way you talk through problems.&lt;/p&gt;

&lt;p&gt;But here's what it won't do: it won't change its mind. It won't wake up tomorrow and realize it was wrong about a framework, a practice, a person. It won't evolve because it learned something new. It won't contradict itself because the evidence demanded it.&lt;/p&gt;

&lt;p&gt;That's the thing worth protecting in yourself. The freedom to be wrong. The courage to change your mind when the evidence shows up. The humility to know that what you're certain about today might be the thing you completely rethink next year.&lt;/p&gt;

&lt;p&gt;Teaching voice to a machine is useful. But your voice matters because it's attached to a mind that keeps changing.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Update — March 17, 2026:&lt;/strong&gt; Unsloth just launched &lt;a href="https://unsloth.ai/docs/new/studio" rel="noopener noreferrer"&gt;Unsloth Studio&lt;/a&gt;, an open-source, no-code web UI for training, running, and exporting models locally. It runs on Mac, Windows, and Linux — no Google Colab notebook required. If you wanted to try the fine-tuning workflow from this article but didn't want to deal with notebook setup, Studio is now the easiest way to get started. Auto-creates datasets from PDFs, CSVs, and docs. Exports to GGUF. All local.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>finetuning</category>
      <category>unsloth</category>
      <category>qwen</category>
    </item>
    <item>
      <title>I Asked My AI Agent to Update My E-Ink Display. It Just Did It.</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Mon, 16 Mar 2026 21:16:01 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/i-asked-my-ai-agent-to-update-my-e-ink-display-it-just-did-it-b5l</link>
      <guid>https://forem.com/andreagriffiths11/i-asked-my-ai-agent-to-update-my-e-ink-display-it-just-did-it-b5l</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep63oyhhkvt7dw9o684p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep63oyhhkvt7dw9o684p.jpg" alt="TRMNL e-ink display" width="795" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;No YAML. No config files. I just said what I wanted.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I have a &lt;a href="https://usetrmnl.com" rel="noopener noreferrer"&gt;TRMNL&lt;/a&gt; on my desk. Small e-ink display — black and white, no notifications, no dopamine tricks. It cycles through plugins that show whatever you tell it to show.&lt;/p&gt;

&lt;p&gt;I wanted one of those plugins to show a daily message from my AI agent, plus my top priorities for the day. Not pulled from GitHub Issues or scraped from a calendar API. Actual priorities — the stuff we'd been working on together.&lt;/p&gt;

&lt;p&gt;So I told my agent: "Every day at noon, push a daily message and my top 3 priorities to my TRMNL display." I gave it the webhook URL.&lt;/p&gt;

&lt;p&gt;That was the entire setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it looks like
&lt;/h2&gt;

&lt;p&gt;Two columns. Left side: a short message from the agent — sometimes motivational, sometimes a callback to something we worked on the day before. Right side: three numbered priorities pulled from our actual conversations and working context.&lt;/p&gt;

&lt;p&gt;The agent knows what matters because it was there while I worked.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;TRMNL plugins support a webhook strategy. You create a private plugin, get a webhook URL, and anything that POSTs JSON to that URL updates your display.&lt;/p&gt;

&lt;p&gt;The JSON:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"merge_variables"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"agent_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Luna"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Four takes to get a clean demo. The fifth one was honest."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"signature"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"— Luna 🌙"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Mar 16, 2026"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"priority_1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Fix Railway env vars"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"priority_2"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Test checkout flow end-to-end"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"priority_3"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Ship the TRMNL plugins repo"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
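&lt;p&gt;If you want to push an update by hand rather than through an agent, a minimal sketch looks like this. The field names mirror the plugin template above; the webhook URL is a placeholder, so the actual POST is left as a comment:&lt;/p&gt;

```python
import json

# Build the merge_variables payload the display template expects.
def build_payload(message, priorities):
    return {
        "merge_variables": {
            "message": message,
            "priority_1": priorities[0],
            "priority_2": priorities[1],
            "priority_3": priorities[2],
        }
    }

body = json.dumps(build_payload("Ship it.", ["Fix env vars", "Test checkout", "Write the README"]))
# Then push it with:
# requests.post(WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"})
print(body)
```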



&lt;p&gt;My agent runs on &lt;a href="https://openclaw.ai" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; and has a cron job that fires daily. It reads its own memory files — notes from our conversations, decisions we made, things still in progress — picks the top priorities, writes a message, and POSTs it.&lt;/p&gt;

&lt;p&gt;You could do the same with &lt;a href="https://docs.github.com/en/copilot/github-copilot-in-the-cli" rel="noopener noreferrer"&gt;GitHub Copilot in the CLI&lt;/a&gt;. Ask it to read your repo context, summarize your priorities, and push to the webhook. One prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  What surprised me
&lt;/h2&gt;

&lt;p&gt;I didn't write a config file. No GitHub Action. No cron expression typed into a terminal.&lt;/p&gt;

&lt;p&gt;I said: "Hey, every day at noon, push a daily message and my top 3 priorities to my TRMNL display. Here's the webhook URL."&lt;/p&gt;

&lt;p&gt;The agent set up its own cron, figured out the JSON format, and started pushing. The next morning my display had priorities on it that I hadn't typed anywhere — pulled from things we'd actually discussed the day before.&lt;/p&gt;

&lt;p&gt;That's what an agent does differently. You describe the outcome. It handles the wiring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;I open-sourced the template and instructions: &lt;a href="https://github.com/AndreaGriffiths11/trmnl-plugins" rel="noopener noreferrer"&gt;trmnl-plugins on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;agent-says&lt;/code&gt; plugin works with any AI agent that can make HTTP requests. The README has the exact prompt to give your agent. Setup takes about two minutes — most of that is creating the plugin on TRMNL.&lt;/p&gt;

&lt;p&gt;I also built a &lt;a href="https://github.com/AndreaGriffiths11/trmnl-plugins/tree/main/github-stars" rel="noopener noreferrer"&gt;GitHub Stars plugin&lt;/a&gt; that shows the most starred repos on GitHub. Same webhook pattern, different data. The repo has both. More coming.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;With gratitude,&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Andrea&lt;/em&gt;&lt;/p&gt;

</description>
      <category>trmnl</category>
      <category>aiagents</category>
      <category>openclaw</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Building an AI-Powered Guest Automation with the Copilot SDK</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Fri, 13 Mar 2026 16:14:47 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/building-an-ai-powered-guest-automation-with-the-copilot-sdk-3205</link>
      <guid>https://forem.com/andreagriffiths11/building-an-ai-powered-guest-automation-with-the-copilot-sdk-3205</guid>
      <description>&lt;h1&gt;
  
  
  Building an AI-Powered Guest Automation with the Copilot SDK
&lt;/h1&gt;

&lt;p&gt;Every week on Open Source Friday, we host open source maintainers on our livestream. When a guest gets confirmed, someone on the team labels the issue to kick off an action that writes a welcome message and shares prep instructions. From there, we manually create event posts and design a thumbnail (if you can call my lack of design sensibility that).&lt;/p&gt;

&lt;p&gt;The problem? Starting from scratch every time meant we'd fall back to the same template:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"You're officially scheduled for Open Source Friday! The stream starts at 1:00 PM ET. Please join at 12:45 PM ET for prep and tech checks."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It worked. But it was boring, and we had better material sitting right there in the issue. Each guest submission included their project details, background, and what they wanted to talk about. We just weren't using it.&lt;/p&gt;

&lt;p&gt;So I built an automation that reads their submission and writes a personalized draft. When a guest mentions they're excited to talk about their CLI tool that helps developers debug faster, the workflow catches that and generates:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We can't wait to hear about your CLI tool and how it's helping developers debug faster. Your approach to solving this problem sounds like exactly what our audience needs to hear about."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not perfect, but a heck of a lot better than starting with a blank comment box.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I wanted was not to replace human attention, just to give us a better starting point.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;It's a GitHub Actions workflow that triggers when we add the &lt;code&gt;scheduled&lt;/code&gt; label to a guest's issue. The workflow automatically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Parses the guest submission (name, GitHub handle, project, and bio)&lt;/li&gt;
&lt;li&gt;Generates a personalized welcome message draft using the Copilot SDK&lt;/li&gt;
&lt;li&gt;Creates a promotional thumbnail using Puppeteer to automate our thumbnail generator&lt;/li&gt;
&lt;li&gt;Posts everything as a scaffolded comment that we can review and personalize further&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The automation doesn't replace the human review. It gives us a draft that already pulls from what they wrote, so we spend our time refining instead of starting from a blank comment box.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Copilot SDK?
&lt;/h2&gt;

&lt;p&gt;I wanted the automation to pull out the interesting bits from each submission and give us a draft that actually sounds human. The Copilot SDK makes this possible without building all the infrastructure myself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/github/copilot-sdk" rel="noopener noreferrer"&gt;&lt;strong&gt;The SDK is in technical preview&lt;/strong&gt;&lt;/a&gt;, and it gives you programmatic access to GitHub Copilot through a simple JavaScript API. No need to deal with raw HTTP requests or wire up authentication yourself.&lt;/p&gt;

&lt;p&gt;Here's what makes it useful:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-turn conversations.&lt;/strong&gt; Sessions maintain context across multiple requests. If you wanted to build something more complex than a single prompt-response (like a chatbot that remembers previous questions), the SDK handles that.&lt;/p&gt;
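
&lt;p&gt;As a sketch, reusing the same &lt;code&gt;createSession&lt;/code&gt; and &lt;code&gt;sendAndWait&lt;/code&gt; calls from the snippet later in this post (with the client already started), a multi-turn exchange looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Illustrative only: the SDK is in technical preview, so treat
// this as a pattern rather than a guaranteed final API.
const session = await client.createSession();

// First turn: the session keeps this context around...
await session.sendAndWait({ prompt: 'Here is a guest bio: ...' });

// ...so a follow-up can build on it without restating anything.
const followUp = await session.sendAndWait({
  prompt: 'Now suggest three questions to ask them on stream.'
});

await session.destroy();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;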

&lt;p&gt;&lt;strong&gt;Custom tools.&lt;/strong&gt; You can define functions that Copilot can invoke during conversations. For example, you could give it a tool to fetch GitHub issues or query your database, and Copilot decides when to use it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lifecycle management.&lt;/strong&gt; The SDK starts the Copilot CLI, which authenticates using the same mechanisms as the CLI itself (interactive login or environment‑provided tokens such as GH_TOKEN/GITHUB_TOKEN with the right permissions), and then proxies your requests to GitHub Copilot.&lt;/p&gt;

&lt;p&gt;For this automation, I'm using the simplest possible pattern: one prompt in, one response out. The prompt does all the heavy lifting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;promptLines&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;You are writing on behalf of the Open Source Friday team...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Guest details:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;- Name: &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;guestName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;- Project: &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;projectName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;- About them (in their own words): &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;guestBackground&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Write a 2-3 sentence welcome message that:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1. Greets them by name&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2. Matches their energy — if playful, be playful back; if formal, be warm but professional&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;3. References something specific they said about themselves&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;4. Shows genuine excitement about having them on the stream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That instruction to "match their energy" is what makes the difference. The workflow reads what the guest wrote and mirrors their tone back. From there, we tweak if needed or ship it as-is.&lt;/p&gt;

&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;

&lt;p&gt;The Copilot SDK is available in Node.js, Python, Go, and .NET. We went with the Node.js implementation since Puppeteer (needed for thumbnail generation) already required Node in the workflow. One runtime, one install step, cleaner setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Architecture
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Guest issue gets "scheduled" label
    ↓
GitHub Actions workflow triggers
    ↓
1. Parse issue → extract guest data
2. Copilot SDK → generate personalized message
3. Puppeteer → create thumbnail
4. Post comment with everything
    ↓
Team reviews and tweaks
    ↓
Guest gets personalized welcome
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1: Parse the issue
&lt;/h3&gt;

&lt;p&gt;Guests submit through a GitHub issue form. The workflow extracts their info with regex:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;nameMatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/### Name&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;\n\s&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;([^\n]&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;/i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handleMatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/### GitHub Handle&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;\n\s&lt;/span&gt;&lt;span class="sr"&gt;*@&lt;/span&gt;&lt;span class="se"&gt;?([^\n\s]&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;/i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;projectMatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/### Project Name&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;\n\s&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;([^\n]&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;/i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing fancy here. Issue forms follow a consistent format, so regex gets us what we need. (Copilot helped me write the regex, so that was one less thing to debug.)&lt;/p&gt;
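
&lt;p&gt;From there, pulling the values out is just indexing into the match groups. The fallback strings here are my choice for when a field is missing, not anything the form requires:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Capture group 1 holds the answer under each "### Heading".
// Fallbacks keep the rest of the workflow from crashing on a
// malformed submission.
const guestName = nameMatch ? nameMatch[1].trim() : 'there';
const handle = handleMatch ? handleMatch[1].trim() : '';
const projectName = projectMatch ? projectMatch[1].trim() : 'your project';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;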

&lt;h3&gt;
  
  
  Step 2: Call the Copilot SDK
&lt;/h3&gt;

&lt;p&gt;Under the hood, the SDK starts the Copilot CLI in server mode and talks to it over JSON‑RPC. You install the CLI once, then add the SDK to your project, and the SDK manages the CLI process lifecycle for you.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install dependencies&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;npm install -g @github/copilot&lt;/span&gt;
    &lt;span class="s"&gt;npm install @github/copilot-sdk puppeteer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's how they work together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your code (@github/copilot-sdk)
    ↓
Copilot CLI (handles auth, spawns server)
    ↓
GitHub Copilot API (the actual AI)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The SDK spawns the CLI, which authenticates using your &lt;code&gt;GH_TOKEN&lt;/code&gt; and proxies requests to Copilot. Then it's just a few lines of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CopilotClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@github/copilot-sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CopilotClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createSession&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sendAndWait&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;personalizedMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;destroy&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The SDK handles sessions and cleanup. I pass the prompt and get back a draft message.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Generate thumbnail
&lt;/h3&gt;

&lt;p&gt;Using Puppeteer, the workflow opens our thumbnail generator, fills in the guest's info, and captures the result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;launch&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;headless&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://andreagriffiths11.github.io/thumbnail-gen/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#guestName&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;guestName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#username&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;handle&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#generateBtn&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Puppeteer is basically browser automation. It opens a headless browser (no GUI), navigates to our thumbnail generator, fills in the form fields, and extracts the canvas data. This beats manually making 50+ thumbnails a year.&lt;/p&gt;
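
&lt;p&gt;The capture step looks roughly like this; the &lt;code&gt;canvas&lt;/code&gt; selector and output path are assumptions, so adjust them to your generator's markup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Wait for the generator to render, then screenshot just the
// canvas element instead of the whole page.
await page.waitForSelector('canvas');
const canvas = await page.$('canvas');
await canvas.screenshot({ path: 'thumbnail.png' });
await browser.close();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;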

&lt;h3&gt;
  
  
  Step 4: Post the comment
&lt;/h3&gt;

&lt;p&gt;Everything gets assembled into a scaffolded comment with the AI-generated welcome message, thumbnail download link, prep checklist, and guest guide. We review it, make any tweaks, and the guest gets a welcome that shows we actually read their submission.&lt;/p&gt;
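
&lt;p&gt;Posting it back is a single Octokit call. Sketched here with an &lt;code&gt;actions/github-script&lt;/code&gt; step, where &lt;code&gt;commentBody&lt;/code&gt; stands in for whatever string you assembled in the previous steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Inside an actions/github-script step: post the scaffolded
// comment back onto the guest's issue. `commentBody` is the
// assembled draft (message + thumbnail link + checklist).
await github.rest.issues.createComment({
  owner: context.repo.owner,
  repo: context.repo.repo,
  issue_number: context.issue.number,
  body: commentBody
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;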

&lt;h3&gt;
  
  
  Kicking it off
&lt;/h3&gt;

&lt;p&gt;One command triggers everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh issue edit 195 &lt;span class="nt"&gt;--repo&lt;/span&gt; githubevents/open-source-friday &lt;span class="nt"&gt;--add-label&lt;/span&gt; &lt;span class="s2"&gt;"scheduled"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or just click the label in the GitHub UI. Either way, we get a scaffolded welcome comment within minutes that we can review and send.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Separate your steps.&lt;/strong&gt; Parsing, AI generation, and thumbnail creation are separate workflow steps. One failure doesn't break everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt engineering is real work.&lt;/strong&gt; Getting the tone right took iteration. The breakthrough was telling Copilot to match the guest's energy, not just acknowledge their project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test with workflow_dispatch.&lt;/strong&gt; Adding a manual trigger made debugging way easier than waiting for label events:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;issues&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;labeled&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;issue_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Issue&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;number&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;test&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;with'&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;number&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can test without spamming your actual issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI is a starting point, not the finish line.&lt;/strong&gt; The automation gives us a draft that's already personalized to the guest's submission. We still review every message before it goes out, but now we're refining instead of writing from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Auto-generated social media copy drafts for promoting episodes&lt;/li&gt;
&lt;li&gt;Calendar invite automation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/github/copilot-sdk" rel="noopener noreferrer"&gt;Copilot SDK&lt;/a&gt; opens up possibilities for automation that gives you a head start on tasks that need a human touch. If you have repetitive communication tasks (onboarding, responses, summaries) where you want to be personal but don't always have time to start from scratch, this pattern might help.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The key isn't replacing human attention.&lt;/strong&gt; It's giving yourself a better starting point. You'll review everything. You'll still add personality. But instead of staring at a canned message, you're starting from something already grounded in who they are. That distinction matters. The automation reads the submission, pulls out the interesting bits, and gives you a draft. You make it yours.&lt;/p&gt;

&lt;p&gt;The code lives in &lt;a href="https://github.com/githubevents/open-source-friday" rel="noopener noreferrer"&gt;githubevents/open-source-friday&lt;/a&gt; if you want to see the full workflow.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;: Andrea Griffiths is a Senior Developer Advocate at GitHub, where she helps engineering teams adopt and scale developer technologies. She's passionate about making technical concepts accessible—to both humans and AI agents. Connect with her on &lt;a href="https://linkedin.com/in/acolombiadev" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://github.com/andreagriffiths11" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, or &lt;a href="https://twitter.com/acolombiadev" rel="noopener noreferrer"&gt;Twitter/X&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>githubactions</category>
      <category>automation</category>
      <category>copilotsdk</category>
    </item>
    <item>
      <title>Why AI Discoverability Matters: Optimizing Your Website for This Generation of Search</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Fri, 13 Mar 2026 16:14:37 +0000</pubDate>
      <link>https://forem.com/andreagriffiths11/why-ai-discoverability-matters-optimizing-your-website-for-this-generation-of-search-2jni</link>
      <guid>https://forem.com/andreagriffiths11/why-ai-discoverability-matters-optimizing-your-website-for-this-generation-of-search-2jni</guid>
      <description>&lt;h1&gt;
  
  
  The Search Landscape Has Changed
&lt;/h1&gt;

&lt;p&gt;When was the last time you opened Google to find an answer? Most developers I know don't anymore. They ask ChatGPT, Claude, or Perplexity. The way people discover content online has fundamentally shifted, and if your website isn't optimized for AI agents, you're basically invisible.&lt;/p&gt;

&lt;p&gt;In 2024-2025, AI-powered search went mainstream. ChatGPT added web search. Claude started analyzing websites in real-time. Perplexity built an entire search engine around AI answers with citations. Google threw AI overviews into search results. The question isn't whether AI search matters—it's whether your content will be found when someone asks.&lt;/p&gt;

&lt;p&gt;I've been thinking about it a LOT. When a developer asks Claude "Who are the leading voices in developer advocacy?" or a recruiter asks ChatGPT "Find me experts in AI-assisted development," your website either shows up as a cited source or it doesn't exist. There's no middle ground.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Being invisible to AI is becoming invisible, period.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  How AI Agents Actually Discover Content
&lt;/h1&gt;

&lt;p&gt;Traditional search engines like Google use crawlers that index your HTML, analyze your links, and rank based on hundreds of signals. AI agents? They work completely differently. Here's what they actually need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Content they can quickly understand&lt;/strong&gt; without parsing nested &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt; tags and CSS classes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured information&lt;/strong&gt; about who you are, what you actually do, and why anyone should care&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Credibility signals&lt;/strong&gt; that are consistent across your metadata&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear citations&lt;/strong&gt; with proper titles, descriptions, and URLs they can point to&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The thing is, most websites are built for human eyes. Your beautiful hero section with animated gradients? Stunning. To an AI agent, it's incomprehensible markup. Your carefully crafted "About Me" story split across five components? Good luck extracting that coherently. The information is there, but it's not AI-accessible.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Wake-Up Call: My Own AI Audit
&lt;/h1&gt;

&lt;p&gt;I realized this problem when I asked Claude to summarize my own developer advocacy work. I have a portfolio website. I have blog posts everywhere. I have conference talks listed across the web. But when I asked Claude the basic questions someone hiring a speaker would ask, it struggled. It could find my GitHub profile and a few scattered articles, but it couldn't answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"What are Andrea's main projects?"&lt;/li&gt;
&lt;li&gt;"What topics does she speak about?"&lt;/li&gt;
&lt;li&gt;"How can I contact her for a speaking engagement?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The information existed. It was literally on my website. But it wasn't AI-accessible. That's when I decided to rebuild my online presence with AI discoverability as a first-class citizen, not an afterthought.&lt;/p&gt;

&lt;h1&gt;
  
  
  What I Actually Built
&lt;/h1&gt;

&lt;p&gt;Here's exactly what I implemented to make my portfolio discoverable by ChatGPT, Claude, Perplexity, and every other AI agent crawling the web:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Created Markdown Endpoints
&lt;/h3&gt;

&lt;p&gt;The biggest breakthrough was realizing something simple: AI models are trained on text. They speak Markdown natively. So I built dedicated endpoints that serve my content in clean, hierarchical Markdown instead of HTML.&lt;/p&gt;

&lt;p&gt;Two key endpoints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/index.md&lt;/code&gt; - My complete professional profile, featured projects, talks, and writings in Markdown&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/writings.md&lt;/code&gt; - All my articles with descriptions, tags, and reading times&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These endpoints pull from the same data sources as my HTML pages but serve it in a format AI agents can parse instantly. When ChatGPT or Claude crawls my site, they get structured, semantic content instead of HTML soup.&lt;/p&gt;

&lt;p&gt;Technically, it's straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple API routes that fetch from GitHub Blog API, Dev.to API, Sessionize&lt;/li&gt;
&lt;li&gt;Format everything as clean, hierarchical Markdown&lt;/li&gt;
&lt;li&gt;Cache responses for 1 hour to reduce server load&lt;/li&gt;
&lt;li&gt;Return proper &lt;code&gt;Content-Type: text/markdown&lt;/code&gt; headers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The payoff? AI agents can understand my entire career in milliseconds.&lt;/p&gt;
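
&lt;p&gt;The handler itself can be tiny. Here's a sketch assuming an Express-style route (my real site is wired differently, and &lt;code&gt;buildProfileMarkdown&lt;/code&gt; is a hypothetical helper; the headers and caching are the part that matters):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;const express = require('express');
const app = express();

app.get('/index.md', async (req, res) =&gt; {
  // Hypothetical helper: fetches from the same data sources as the
  // HTML pages and formats the result as hierarchical Markdown.
  const markdown = await buildProfileMarkdown();

  res.set('Content-Type', 'text/markdown; charset=utf-8');
  res.set('Cache-Control', 'public, max-age=3600'); // cache for 1 hour
  res.send(markdown);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;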

&lt;h3&gt;
  
  
  2. Explicitly Welcomed AI Crawlers (Not Everyone Does This)
&lt;/h3&gt;

&lt;p&gt;Here's something that surprised me: most sites accidentally block AI crawlers. Overly strict robots.txt rules. Rate limiting that catches bots too. You're essentially shooting yourself in the foot.&lt;/p&gt;

&lt;p&gt;I updated my robots.txt to explicitly welcome every major AI crawler and point them directly to my Markdown endpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# AI Bots and Crawlers - Full access including Markdown endpoints
&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;-&lt;span class="n"&gt;agent&lt;/span&gt;: &lt;span class="n"&gt;GPTBot&lt;/span&gt;
&lt;span class="n"&gt;Allow&lt;/span&gt;: /
&lt;span class="n"&gt;Allow&lt;/span&gt;: /&lt;span class="n"&gt;index&lt;/span&gt;.&lt;span class="n"&gt;md&lt;/span&gt;
&lt;span class="n"&gt;Allow&lt;/span&gt;: /&lt;span class="n"&gt;writings&lt;/span&gt;.&lt;span class="n"&gt;md&lt;/span&gt;

&lt;span class="n"&gt;User&lt;/span&gt;-&lt;span class="n"&gt;agent&lt;/span&gt;: &lt;span class="n"&gt;ChatGPT&lt;/span&gt;-&lt;span class="n"&gt;User&lt;/span&gt;
&lt;span class="n"&gt;Allow&lt;/span&gt;: /

&lt;span class="n"&gt;User&lt;/span&gt;-&lt;span class="n"&gt;agent&lt;/span&gt;: &lt;span class="n"&gt;ClaudeBot&lt;/span&gt;
&lt;span class="n"&gt;Allow&lt;/span&gt;: /

&lt;span class="n"&gt;User&lt;/span&gt;-&lt;span class="n"&gt;agent&lt;/span&gt;: &lt;span class="n"&gt;anthropic&lt;/span&gt;-&lt;span class="n"&gt;ai&lt;/span&gt;
&lt;span class="n"&gt;Allow&lt;/span&gt;: /

&lt;span class="n"&gt;User&lt;/span&gt;-&lt;span class="n"&gt;agent&lt;/span&gt;: &lt;span class="n"&gt;PerplexityBot&lt;/span&gt;
&lt;span class="n"&gt;Allow&lt;/span&gt;: /

&lt;span class="n"&gt;User&lt;/span&gt;-&lt;span class="n"&gt;agent&lt;/span&gt;: &lt;span class="n"&gt;CCBot&lt;/span&gt;
&lt;span class="n"&gt;Allow&lt;/span&gt;: /

&lt;span class="n"&gt;User&lt;/span&gt;-&lt;span class="n"&gt;agent&lt;/span&gt;: &lt;span class="n"&gt;cohere&lt;/span&gt;-&lt;span class="n"&gt;ai&lt;/span&gt;
&lt;span class="n"&gt;Allow&lt;/span&gt;: /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That CCBot from Common Crawl? Don't sleep on it. Its dataset powers a ton of AI training runs and research projects. Explicitly allowing it means your content gets into those datasets.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Added Rich Structured Data (So AI Agents Know What They're Reading)
&lt;/h3&gt;

&lt;p&gt;AI agents love structured data. It's like giving them a cheat sheet. I implemented Schema.org JSON-LD markup across the site:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Person Schema&lt;/strong&gt;: My name, job title, organization, areas of expertise (via &lt;code&gt;knowsAbout&lt;/code&gt;), and social profiles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Organization Schema&lt;/strong&gt;: Brand consistency across platforms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebSite Schema&lt;/strong&gt;: Site metadata and description&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Schema&lt;/strong&gt;: Speaking engagements with dates, locations, whether it's in-person or hybrid&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Article Schema&lt;/strong&gt;: Blog posts with authors, publish dates, clear descriptions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structured data serves double duty: it helps traditional search engines display rich snippets, but more importantly, it gives AI agents verifiable, machine-readable facts about my work. No guessing. No parsing ambiguity.&lt;/p&gt;
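&lt;p&gt;To make that concrete, here's a pared-down Person schema. The values are placeholders; swap in your own details and drop it into a &lt;code&gt;script&lt;/code&gt; tag in your page's &lt;code&gt;head&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&amp;lt;script type="application/ld+json"&amp;gt;
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Your Name",
  "jobTitle": "Senior Developer Advocate",
  "worksFor": { "@type": "Organization", "name": "Example Co" },
  "knowsAbout": ["AI-assisted development", "developer tools"],
  "url": "https://example.com",
  "sameAs": [
    "https://github.com/yourhandle",
    "https://www.linkedin.com/in/yourhandle"
  ]
}
&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Paste the result into Google's Rich Results Test to confirm it parses before you ship it.&lt;/p&gt;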

&lt;h3&gt;
  
  
  4. Optimized for Speed (Because AI Crawlers Have Limited Patience)
&lt;/h3&gt;

&lt;p&gt;Slow site? Incomplete indexing. AI crawlers will give up if your site takes forever to load. I implemented:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image optimization&lt;/strong&gt;: WebP format with responsive srcset cuts file size by 92-97%&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Font optimization&lt;/strong&gt;: Self-hosted fonts with &lt;code&gt;font-display: swap&lt;/code&gt; so text renders immediately&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lazy loading&lt;/strong&gt;: Below-the-fold images and iframes only load when needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mobile-first JavaScript&lt;/strong&gt;: Desktop-only components skip loading on mobile, thanks to Astro's &lt;code&gt;client:media&lt;/code&gt; directive&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart caching&lt;/strong&gt;: 1-hour cache on Markdown endpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result? My homepage loads in under 1 second. Faster crawling, better indexing.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Built a Dual-Format Strategy
&lt;/h3&gt;

&lt;p&gt;Here's what most people miss: &lt;strong&gt;You don't replace your beautiful website. You augment it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Humans still see the beautiful, interactive portfolio with animations and responsive design. The experience is great. But AI agents can request &lt;code&gt;/index.md&lt;/code&gt; and get a clean, comprehensive profile in milliseconds.&lt;/p&gt;
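&lt;p&gt;Serving that endpoint with the 1-hour cache I mentioned earlier can be a couple of lines of config. On a Netlify-style host it's a &lt;code&gt;_headers&lt;/code&gt; file; adapt the idea for whatever platform you're on:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;# Cache Markdown endpoints for one hour (3600 seconds)
/index.md
  Cache-Control: public, max-age=3600

/writings.md
  Cache-Control: public, max-age=3600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;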

&lt;p&gt;It's like having both a visual resume and a plain-text ATS-friendly resume. Same information. Different audience. Each optimized for what that audience actually needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can have both. You should have both.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  The Results
&lt;/h1&gt;

&lt;p&gt;How do you know if this works? I tested by asking AI assistants about my work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before optimization&lt;/strong&gt;: "I can see Andrea Griffiths has a GitHub profile... There's a blog post from 2024... Let me search for more information..."&lt;/p&gt;

&lt;p&gt;Vague. Incomplete.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After optimization&lt;/strong&gt;: "Andrea Griffiths is a Senior Developer Advocate at GitHub specializing in AI-assisted development workflows, developer tools, and team leadership. Her notable projects include Team X-Ray, a VS Code extension for revealing team expertise, and the From Pair to Peer AI Leadership Framework. She's spoken at GitHub Universe, Netlify Compose, and JFrog SwampUP..."&lt;/p&gt;

&lt;p&gt;Night and day.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why This Matters for Your Career
&lt;/h1&gt;

&lt;p&gt;If you're a developer, designer, writer, or any professional with an online presence, AI discoverability isn't optional anymore.&lt;/p&gt;

&lt;p&gt;Recruiters, conference organizers, potential clients—they're all using AI assistants to research people. When they ask "Find me an expert in Kubernetes security" or "Who should I invite to speak about TypeScript best practices?", your name either comes up or it doesn't. No second chances.&lt;/p&gt;

&lt;p&gt;In the SEO era, backlinks were currency. In the AI era, citations are everything. Every time an AI assistant cites your website, it's validating your expertise, driving qualified traffic, and building your reputation in AI training datasets. Unlike humans who visit once and leave, AI agents constantly recrawl and update their understanding—becoming persistent advocates for your work in thousands of conversations you'll never see.&lt;/p&gt;

&lt;p&gt;That's free marketing at scale. And with AI search accelerating fast (SearchGPT, AI Overviews, Perplexity), optimizing now means you're ahead of the curve.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting Started: Your Checklist
&lt;/h1&gt;

&lt;p&gt;Ready to make your website AI-discoverable? Here's where to start, broken into realistic effort levels:&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Wins:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Update your robots.txt to explicitly allow AI bots (GPTBot, ClaudeBot, CCBot, PerplexityBot)&lt;/li&gt;
&lt;li&gt;[ ] Add basic structured data (Person schema with your name, job title, and social links)&lt;/li&gt;
&lt;li&gt;[ ] Create a simple &lt;code&gt;/about.md&lt;/code&gt; endpoint with your bio in Markdown&lt;/li&gt;
&lt;li&gt;[ ] Test by asking ChatGPT or Claude "What do you know about [your name]?" and see what comes back&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Medium Effort:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Implement comprehensive JSON-LD structured data across your site&lt;/li&gt;
&lt;li&gt;[ ] Create Markdown endpoints for your main content (projects, blog posts, talks)&lt;/li&gt;
&lt;li&gt;[ ] Optimize images with modern formats (WebP, AVIF) and lazy loading&lt;/li&gt;
&lt;li&gt;[ ] Self-host fonts and add proper caching headers&lt;/li&gt;
&lt;li&gt;[ ] Generate or verify your sitemap is complete&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Advanced:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Monitor AI citations by periodically testing different AI assistants&lt;/li&gt;
&lt;li&gt;[ ] Keep structured data updated when you publish new content&lt;/li&gt;
&lt;li&gt;[ ] Experiment with different Markdown formats to see what AI agents parse best&lt;/li&gt;
&lt;li&gt;[ ] Track referral traffic from AI search engines in your analytics&lt;/li&gt;
&lt;li&gt;[ ] Join communities discussing AI SEO so you stay ahead of best practices&lt;/li&gt;
&lt;/ul&gt;
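&lt;p&gt;One low-tech way to track crawler traffic: grep your server's access log for the AI user agents you allowed in robots.txt. A quick sketch, with a few inline sample lines standing in for a real log file (point the pipeline at your actual log path instead):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Count requests from known AI crawlers
# (sample log lines shown; use your real access log)
printf '%s\n' \
  '1.2.3.4 - - "GET /index.md HTTP/1.1" 200 "GPTBot/1.1"' \
  '5.6.7.8 - - "GET / HTTP/1.1" 200 "Mozilla/5.0"' \
  '9.9.9.9 - - "GET /writings.md HTTP/1.1" 200 "PerplexityBot/1.0"' \
| grep -Eic 'GPTBot|ClaudeBot|PerplexityBot|CCBot'
# prints 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;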

&lt;h2&gt;
  
  
  The Dual-Format Future
&lt;/h2&gt;

&lt;p&gt;The web is evolving to serve two audiences: humans who want beauty and interactivity, and AI agents who want structure and semantics. The websites that thrive will master both.&lt;/p&gt;

&lt;p&gt;Your beautiful portfolio with smooth animations and stunning design? Keep it. That's for the human who wants to feel your personality and style.&lt;/p&gt;

&lt;p&gt;But also create clean, structured pathways for AI agents to understand your expertise, cite your work, and recommend you to millions of people asking questions you'll never hear.&lt;/p&gt;

&lt;p&gt;Because in 2026 and beyond, being found doesn't just mean ranking on Google's first page. It means being the answer when someone asks an AI, "Who's an expert in this?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make sure that expert is you.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources &amp;amp; Further Reading
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://schema.org/" rel="noopener noreferrer"&gt;Schema.org Documentation&lt;/a&gt; - Learn about structured data types&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://search.google.com/test/rich-results" rel="noopener noreferrer"&gt;Google's Rich Results Test&lt;/a&gt; - Validate your structured data&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://platform.openai.com/docs/gptbot" rel="noopener noreferrer"&gt;OpenAI's GPTBot Documentation&lt;/a&gt; - Official crawler docs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.anthropic.com/policies/web-crawler" rel="noopener noreferrer"&gt;Anthropic's Claude Web Crawler&lt;/a&gt; - How ClaudeBot works&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://commoncrawl.org/" rel="noopener noreferrer"&gt;Common Crawl&lt;/a&gt; - The dataset powering many AI models&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Want to see how I actually built this? Check out my website at &lt;a href="https://andreagriffiths.dev/" rel="noopener noreferrer"&gt;andreagriffiths.dev&lt;/a&gt; or dive into the &lt;a href="https://andreagriffiths.dev/index.md" rel="noopener noreferrer"&gt;Markdown endpoint&lt;/a&gt; that makes it all work.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;: Andrea Griffiths is a Senior Developer Advocate at GitHub, where she helps engineering teams adopt and scale developer technologies. She's passionate about making technical concepts accessible—to both humans and AI agents. Connect with her on &lt;a href="https://linkedin.com/in/acolombiadev" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://github.com/andreagriffiths11" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, or &lt;a href="https://twitter.com/acolombiadev" rel="noopener noreferrer"&gt;Twitter/X&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>seo</category>
      <category>devrel</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
