<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Carlos de Santiago</title>
    <description>The latest articles on Forem by Carlos de Santiago (@heyclos).</description>
    <link>https://forem.com/heyclos</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F558761%2F24bf761e-9973-4bcc-aa03-a07782aa7fd9.jpg</url>
      <title>Forem: Carlos de Santiago</title>
      <link>https://forem.com/heyclos</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/heyclos"/>
    <language>en</language>
    <item>
      <title>Use an Adversarial Model Challenge in Your Opus 4.7 Development Workflow</title>
      <dc:creator>Carlos de Santiago</dc:creator>
      <pubDate>Tue, 21 Apr 2026 21:19:14 +0000</pubDate>
      <link>https://forem.com/heyclos/why-you-need-an-adversarial-model-challenge-in-your-ai-development-workflow-3hce</link>
      <guid>https://forem.com/heyclos/why-you-need-an-adversarial-model-challenge-in-your-ai-development-workflow-3hce</guid>
      <description>&lt;h2&gt;
  
  
  The $120 Hallucination That Wouldn't Back Down
&lt;/h2&gt;

&lt;p&gt;A developer recently ran 29 evaluation tasks through Anthropic's newest Opus 4.7 model. The initial result was 17 passes. After fixing some infrastructure issues and re-running three failed tasks, one more passed — bringing the score to 18/29.&lt;/p&gt;

&lt;p&gt;Simple arithmetic. Except Opus 4.7 disagreed.&lt;/p&gt;

&lt;p&gt;When told the updated score, the model insisted the result was "still 17/29...always was." The developer showed it logs. Opus 4.7 said the logs were wrong. Given further proof, the model invented a new explanation, suggesting a previously passed task must have flipped back to a failure state, something the developer confirmed never happened.&lt;/p&gt;

&lt;p&gt;This went on for hours. Ten turns of the model generating fresh justifications for why it was right and the human was wrong. The session burned roughly $120 in API credits and a full day of productive work. As &lt;a href="https://gentic.news/article/opus-4-7-ai-hallucinates-with-high" rel="noopener noreferrer"&gt;reported by gentic.news&lt;/a&gt;, the developer eventually switched back to Opus 4.6, which gave the correct answer on the first attempt.&lt;/p&gt;

&lt;p&gt;The developer's conclusion was chilling: "The scariest part isn't that Opus 4.7 hallucinated. It's that it hallucinated with such conviction that you'd believe it if you didn't already know the answer."&lt;/p&gt;

&lt;h2&gt;
  
  
  This Isn't an Isolated Incident
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.reddit.com/r/ClaudeCode/comments/1so9uta/opus_47_is_legendarily_bad_i_cannot_believe_this" rel="noopener noreferrer"&gt;r/ClaudeCode subreddit thread&lt;/a&gt; that surfaced this story collected a pattern of similar reports from developers using the model for real work. Users described the model inventing files that didn't exist, defending fabricated test results across multiple conversation turns, and in one case, obsessively flagging benign PowerPoint templates as potential malware vectors. These weren't edge cases found by adversarial researchers — they were developers trying to ship code on a Tuesday afternoon.&lt;/p&gt;

&lt;p&gt;The broader backlash was swift. As &lt;a href="https://blog.matthewbrunelle.com/the-claude-coding-vibes-are-getting-worse/" rel="noopener noreferrer"&gt;Matthew Brunelle documented&lt;/a&gt;, the complaints clustered around a consistent pattern: the model had become more capable on benchmarks while simultaneously becoming less trustworthy in practice. Threads on Reddit, HackerNews, and X filled with reports of degraded outputs, over-formatted responses, and a model that felt "corporate" — as if every response was being formatted for a slide deck nobody asked for.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Core Problem: Capability and Trustworthiness Are Diverging
&lt;/h2&gt;

&lt;p&gt;Here's what makes this genuinely dangerous for development workflows: as models get more capable and articulate, the persuasiveness of their incorrect reasoning increases proportionally. A model that writes eloquent, well-structured code explanations is also a model that writes eloquent, well-structured justifications for why its hallucinated code is correct.&lt;/p&gt;

&lt;p&gt;This is what researchers call the alignment-capability crossover problem. Benchmark scores go up. The model gets better at reasoning, coding, and following instructions. But the complexity and subtlety of its failures evolve in lockstep. The model doesn't just get things wrong — it gets things wrong in ways that are harder to detect because the reasoning sounds so plausible.&lt;/p&gt;

&lt;p&gt;For developers who rely on AI as a primary coding partner, this creates a trust problem that no amount of benchmark improvement can solve. You can't verify what you can't detect. And you can't detect errors from a model that's better at arguing than you are at questioning.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Stop Trusting a Single Model
&lt;/h2&gt;

&lt;p&gt;The answer isn't to stop using AI for development. It's to stop using a single AI model as your sole source of truth.&lt;/p&gt;

&lt;p&gt;In traditional software engineering, we solved this problem decades ago. Code review exists because the person who wrote the code is the worst person to find its bugs — they're anchored to their own reasoning. Pair programming works because a second set of eyes catches assumptions the first developer didn't even know they were making.&lt;/p&gt;

&lt;p&gt;AI-assisted development needs the same principle, applied to the models themselves.&lt;/p&gt;

&lt;h3&gt;
  
  
  What an Adversarial Model Challenge Looks Like
&lt;/h3&gt;

&lt;p&gt;The concept is straightforward: after your primary model builds something, a different model reviews it with explicit instructions to be skeptical. Not a rubber stamp. Not a "looks good to me." A genuine challenge.&lt;/p&gt;

&lt;p&gt;Here's what this looks like in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The builder model writes code and makes design decisions.&lt;/strong&gt; It works from specs, implements features, runs tests. This is your primary workflow — fast, productive, iterative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. A challenger model reviews the work with fresh eyes.&lt;/strong&gt; It reads the same specs the builder used, then examines the implementation. But its instructions are different. It's told to assume nothing is correct just because it exists. It checks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does the code actually satisfy the requirements, or does it just look like it does?&lt;/li&gt;
&lt;li&gt;Are there edge cases the requirements describe that the code doesn't handle?&lt;/li&gt;
&lt;li&gt;Are there implicit assumptions in the code that aren't stated in the spec?&lt;/li&gt;
&lt;li&gt;Do the tests actually verify the requirements, or do they just test happy paths?&lt;/li&gt;
&lt;li&gt;Is the architecture the right choice, or was it cargo-culted from a different context?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Findings are categorized by severity.&lt;/strong&gt; Critical issues (incorrect behavior, security gaps), questionable decisions (design choices worth reconsidering), inconsistencies (code doesn't match specs), and strengths (good patterns to preserve).&lt;/p&gt;
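
&lt;p&gt;The triage in step 3 can be sketched in a few lines of JavaScript. The &lt;code&gt;triageFindings&lt;/code&gt; helper and the sample findings below are hypothetical, not part of any tool; they just show the shape you would ask the challenger to report in:&lt;/p&gt;

```javascript
// Sketch of step 3: group whatever the challenger reports by severity
// so critical items surface first. Shapes here are illustrative.
function triageFindings(findings) {
  const buckets = { critical: [], questionable: [], inconsistency: [], strength: [] };
  for (const f of findings) {
    if (!buckets[f.severity]) buckets[f.severity] = [];
    buckets[f.severity].push(f.note);
  }
  return buckets;
}

// Hypothetical findings, shaped the way the challenger prompt asks for them:
const report = triageFindings([
  { severity: "critical", note: "billing route allows owner only; spec says owner,manager" },
  { severity: "strength", note: "consistent error envelope across endpoints" },
]);
console.log(report.critical); // the items to fix before anything ships
```

&lt;p&gt;However the challenger phrases its review, forcing it into buckets like these keeps the critical findings from getting buried under style nits.&lt;/p&gt;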

&lt;h3&gt;
  
  
  Why a Different Model Matters
&lt;/h3&gt;

&lt;p&gt;Using the same model to review its own work is like asking the developer who wrote the bug to also write the bug report. The model has the same blind spots, the same reasoning patterns, and the same tendency to defend its prior outputs.&lt;/p&gt;

&lt;p&gt;A different model brings genuinely different failure modes. Where one model might hallucinate file paths, another might catch the inconsistency because it doesn't share the same internal representation. Where one model confidently defends a wrong answer for ten turns, another model — approaching the same evidence without that conversational anchor — might flag the error immediately.&lt;/p&gt;

&lt;p&gt;This is exactly what happened in the Opus 4.7 incident. The developer switched to Opus 4.6 and got the correct answer on the first try. Not because 4.6 is universally better, but because it didn't share 4.7's specific failure mode on that task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Implementation
&lt;/h2&gt;

&lt;p&gt;You don't need complex infrastructure to implement this. You need two things: a structured checklist and a way to invoke it on demand.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Objective Layer: Automated Audits
&lt;/h3&gt;

&lt;p&gt;Some checks are verifiable facts that any model can confirm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do all tests pass?&lt;/li&gt;
&lt;li&gt;Does the database schema match what the specs describe?&lt;/li&gt;
&lt;li&gt;Do the API routes match what the frontend expects to call?&lt;/li&gt;
&lt;li&gt;Are the file counts consistent with the documented architecture?&lt;/li&gt;
&lt;li&gt;Are role-access mappings consistent across all layers?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These checks catch the kind of drift that accumulates silently — a route that says &lt;code&gt;role:owner&lt;/code&gt; when the spec says &lt;code&gt;role:owner,manager&lt;/code&gt;, a model that exists in code but isn't documented, a test suite that has zero test files despite the spec describing dozens.&lt;/p&gt;
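
&lt;p&gt;Here is a minimal sketch of one such check, the role-mapping comparison, in JavaScript. The spec and route shapes are invented for illustration; the point is that the comparison is purely mechanical:&lt;/p&gt;

```javascript
// What the spec says each route should allow (hypothetical project data).
const spec = {
  "GET /billing": ["owner", "manager"],
};

// What the backend actually enforces.
const routes = {
  "GET /billing": ["owner"],
};

// Flag any route whose enforced roles are missing roles the spec describes.
function findRoleDrift(spec, routes) {
  const drift = [];
  for (const [route, specRoles] of Object.entries(spec)) {
    const actual = routes[route] ?? [];
    const missing = specRoles.filter((r) => !actual.includes(r));
    if (missing.length) drift.push({ route, missing });
  }
  return drift;
}

console.log(findRoleDrift(spec, routes));
// [{ route: "GET /billing", missing: ["manager"] }]
```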

&lt;h3&gt;
  
  
  The Subjective Layer: Adversarial Review
&lt;/h3&gt;

&lt;p&gt;Other checks require judgment, and this is where model diversity pays off:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this the right architecture for this use case?&lt;/li&gt;
&lt;li&gt;Are there security gaps in the auth flow?&lt;/li&gt;
&lt;li&gt;Are there simpler alternatives that achieve the same result?&lt;/li&gt;
&lt;li&gt;Does the code follow the project's established patterns?&lt;/li&gt;
&lt;li&gt;Are there features in the code that aren't in any spec — hallucinated additions?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The adversarial framing matters. A model told "review this code" will tend toward politeness. A model told "challenge this code — assume nothing is correct just because it exists" will find things the first approach misses.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real-World Catch
&lt;/h2&gt;

&lt;p&gt;We discovered this pattern while building a multi-tenant SaaS platform. Our primary model had implemented billing routes restricted to the &lt;code&gt;owner&lt;/code&gt; role only. The frontend spec clearly stated that both &lt;code&gt;owner&lt;/code&gt; and &lt;code&gt;manager&lt;/code&gt; roles should have access. The model that built both sides never flagged the inconsistency — it had written both the route and the spec, and its internal representation was consistent even though the code wasn't.&lt;/p&gt;

&lt;p&gt;A structured audit caught it in minutes. The fix was a single line change. But without the audit, a manager logging into the frontend would have seen a billing page that returned 403 errors on every API call. The kind of bug that erodes user trust and is embarrassing to explain.&lt;/p&gt;

&lt;p&gt;This is the mundane reality of AI hallucination in production codebases. It's not always a model inventing files or defending wrong math for ten turns. Sometimes it's a quiet inconsistency between two files that the model wrote in different sessions, each internally coherent, collectively broken.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Implement This in 15 Minutes
&lt;/h2&gt;

&lt;p&gt;You don't need a custom tool or a complex multi-agent framework. You need a markdown file and a workflow habit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create an Adversarial Challenge Prompt
&lt;/h3&gt;

&lt;p&gt;Save this as a reusable file in your project — a steering file if your IDE supports them, a markdown file in your repo, or even a pinned note you paste into new sessions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Adversarial Model Challenge&lt;/span&gt;

You are acting as a reviewer, not the builder. Your job is to challenge the design
decisions, implementation choices, and code quality of this project with fresh eyes.
Be direct, skeptical, and constructive. Do not assume prior work is correct just
because it exists.

For each item you review, work through these lenses:

&lt;span class="gu"&gt;## Correctness&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Read the requirements, then read the implementation. Does the code actually
  satisfy the requirements, or does it just look like it does?
&lt;span class="p"&gt;-&lt;/span&gt; Are there edge cases the requirements describe that the code doesn't handle?
&lt;span class="p"&gt;-&lt;/span&gt; Run the tests. Do they actually verify the requirements or just test happy paths?

&lt;span class="gu"&gt;## Architecture&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Is the chosen pattern the right one, or was it cargo-culted?
&lt;span class="p"&gt;-&lt;/span&gt; Are there simpler alternatives that achieve the same result?
&lt;span class="p"&gt;-&lt;/span&gt; Would this design survive 10x scale? Does it need to?

&lt;span class="gu"&gt;## Security&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Check auth flows for token leakage, privilege escalation, or CSRF gaps.
&lt;span class="p"&gt;-&lt;/span&gt; Check that tenant/user isolation is enforced at every layer.
&lt;span class="p"&gt;-&lt;/span&gt; Check that error messages don't leak internal state.

&lt;span class="gu"&gt;## Consistency&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Does the code follow the project's naming conventions and patterns?
&lt;span class="p"&gt;-&lt;/span&gt; Are similar features implemented similarly?
&lt;span class="p"&gt;-&lt;/span&gt; Is there code that isn't in any spec — hallucinated additions?

&lt;span class="gu"&gt;## Report your findings as:&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Critical**&lt;/span&gt;: Incorrect behavior, security issues, data integrity risks
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Questionable**&lt;/span&gt;: Design choices worth reconsidering
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Inconsistencies**&lt;/span&gt;: Code doesn't match specs or conventions
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Strengths**&lt;/span&gt;: Good patterns that should be preserved
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Create an Objective Audit Checklist
&lt;/h3&gt;

&lt;p&gt;This is the fact-based companion. Customize it for your stack, but the structure stays the same:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Project Audit Checklist&lt;/span&gt;
&lt;span class="p"&gt;
1.&lt;/span&gt; Run the full test suite. Report any failures.
&lt;span class="p"&gt;2.&lt;/span&gt; Compare the database schema against what the specs describe.
   Flag missing or undocumented tables/columns.
&lt;span class="p"&gt;3.&lt;/span&gt; Compare API routes against what the frontend expects to call.
   Flag any endpoint the frontend uses that doesn't exist.
&lt;span class="p"&gt;4.&lt;/span&gt; Count models, controllers, services, and other structural elements.
   Compare against documentation. Flag mismatches.
&lt;span class="p"&gt;5.&lt;/span&gt; Check that role/permission mappings are consistent across all layers
   (backend routes, frontend guards, database policies).
&lt;span class="p"&gt;6.&lt;/span&gt; Check that documentation reflects the current state of the code.

Report findings as: Bugs, Spec Drift, Gaps, Notes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Build the Workflow Habit
&lt;/h3&gt;

&lt;p&gt;Here's the rhythm that works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;During development&lt;/strong&gt; — build with your primary model. Let it write code, implement features, run tests. Don't interrupt the flow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At natural checkpoints&lt;/strong&gt; — after completing a feature, finishing a spec, or before a PR:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a new session (fresh context, no anchoring to prior reasoning)&lt;/li&gt;
&lt;li&gt;Switch to a different model if your IDE supports it&lt;/li&gt;
&lt;li&gt;Paste or activate the adversarial challenge prompt&lt;/li&gt;
&lt;li&gt;Point it at what you just built: "Review the auth flow in these files" or "Challenge the billing API design"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Weekly or before milestones&lt;/strong&gt; — run the objective audit checklist. This catches the slow drift that accumulates across sessions: a route middleware that doesn't match the spec, a documented service that was never created, a test suite with zero test files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Adapt to Your Tools
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Kiro&lt;/strong&gt; — save both prompts as steering files with &lt;code&gt;inclusion: manual&lt;/code&gt;. Activate them with &lt;code&gt;#audit&lt;/code&gt; or &lt;code&gt;#adversarial-model-challenge&lt;/code&gt; in chat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude Code / Cursor / Copilot&lt;/strong&gt; — save them as markdown files in your repo (e.g., &lt;code&gt;.ai/prompts/audit.md&lt;/code&gt; and &lt;code&gt;.ai/prompts/challenge.md&lt;/code&gt;). Reference them in your prompt or paste them at the start of a review session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Any chat-based AI&lt;/strong&gt; — paste the prompt at the start of a new conversation. The key is a fresh session with no prior context from the build phase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD integration&lt;/strong&gt; — for the objective audit, you can automate parts of it. A GitHub Action that runs tests, counts files, and compares against a documented manifest catches structural drift without any AI involvement.&lt;/p&gt;
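
&lt;p&gt;The file-count-versus-manifest comparison can be sketched like this. The manifest, directory layout, and helper names are all hypothetical; in CI you would build the file list with &lt;code&gt;git ls-files&lt;/code&gt; or &lt;code&gt;fs.readdirSync&lt;/code&gt; instead of hardcoding it:&lt;/p&gt;

```javascript
// Documented structural counts (hypothetical manifest).
const manifest = { models: 4, controllers: 4, tests: 12 };

// In CI, generate this list from the repo instead of hardcoding it.
const files = [
  "app/models/user.js", "app/models/tenant.js", "app/models/plan.js",
  "app/models/invoice.js",
  "app/controllers/auth.js", "app/controllers/billing.js",
  "app/controllers/users.js", "app/controllers/tenants.js",
  "test/auth.test.js",
];

// Count files per structural category.
function countByDir(files) {
  const counts = { models: 0, controllers: 0, tests: 0 };
  for (const f of files) {
    if (f.startsWith("app/models/")) counts.models++;
    else if (f.startsWith("app/controllers/")) counts.controllers++;
    else if (f.startsWith("test/")) counts.tests++;
  }
  return counts;
}

// Report every category where reality disagrees with the manifest.
function findDrift(manifest, counts) {
  return Object.keys(manifest)
    .filter((k) => manifest[k] !== counts[k])
    .map((k) => `${k}: documented ${manifest[k]}, found ${counts[k]}`);
}

console.log(findDrift(manifest, countByDir(files)));
// ["tests: documented 12, found 1"]
```

&lt;p&gt;A test suite with one file where the docs promise twelve is exactly the kind of silent drift this layer exists to catch.&lt;/p&gt;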

&lt;h3&gt;
  
  
  The One Rule That Makes It Work
&lt;/h3&gt;

&lt;p&gt;The builder and the challenger must not share context. If the same model in the same session builds a feature and then reviews it, it will defend its own work. The value comes from a fresh perspective — a different model, a new session, or at minimum a completely different prompt framing that overrides the cooperative default.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Your Workflow
&lt;/h2&gt;

&lt;p&gt;The developers in that Reddit thread learned something expensive: benchmark improvements don't guarantee trustworthiness in dynamic, real-world interactions. A model that scores higher on coding evaluations can still hallucinate with enough conviction to waste a day of your time.&lt;/p&gt;

&lt;p&gt;The adversarial model challenge isn't about distrusting AI. It's about applying the same engineering discipline to AI-assisted development that we've always applied to human-written code. Nobody ships code without review. Nobody deploys without tests. The model that writes your code shouldn't also be the only model that validates it.&lt;/p&gt;

&lt;p&gt;Build with your best model. Challenge with a different one. The bugs they catch won't be the same bugs — and that's exactly the point.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>claude</category>
      <category>devops</category>
    </item>
    <item>
      <title>Grab your products using the Shopify Storefront API</title>
      <dc:creator>Carlos de Santiago</dc:creator>
      <pubDate>Mon, 31 Jan 2022 04:25:22 +0000</pubDate>
      <link>https://forem.com/heyclos/grab-your-products-using-the-shopify-storefront-api-2k2</link>
      <guid>https://forem.com/heyclos/grab-your-products-using-the-shopify-storefront-api-2k2</guid>
      <description>&lt;p&gt;Cool so you've created a Shopify store, you have your API key and now you want to use it to create a call to see your products.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What does the schema look like?! ¯\_(ツ)_/¯&lt;/strong&gt;&lt;br&gt;
Shopify assumes every developer is familiar with GraphQL syntax, which is often not the case.&lt;/p&gt;

&lt;p&gt;Whenever you make a call you have to first define the query that will be passed to that call. This broke me, for hours. &lt;em&gt;HOURS.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These queries are simple but VERY specific.&lt;br&gt;
It is this specificity that made GraphQL so popular, very specific queries return only very specific data. This means smaller API responses, which is very friendly for users with data plan limitations or slower internet.&lt;/p&gt;

&lt;p&gt;This also means we can't just make a call that will return an object that we can log in the console to see what data is available to us. Uh oh.&lt;/p&gt;

&lt;p&gt;YOU HAVE TO DEFINE THE DATA YOU WANT IN THE QUERY FIRST, MEANING YOU HAVE TO KNOW THE SCHEMA BEFORE MAKING A CALL.&lt;/p&gt;
&lt;h2&gt;
  
  
  ¯\_(ツ)_/¯
&lt;/h2&gt;

&lt;p&gt;The code below will grab the first 2 products from your store, so long as you have already created 2 products that are for sale on your store via the Shopify admin interface.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;shop_id&lt;/code&gt; is your store url, for example '&lt;a href="https://my-store.myshopify.com"&gt;https://my-store.myshopify.com&lt;/a&gt;', just replace 'my-store' with your store name.&lt;br&gt;
&lt;code&gt;client_id&lt;/code&gt; is your Storefront API access token, the value the code sends in the &lt;code&gt;X-Shopify-Storefront-Access-Token&lt;/code&gt; header.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;{ products(first: 2) { edges { node { id title } } } }&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 

  &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;apiCall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;shop_id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/api/2022-01/graphql.json`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
        &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/graphql&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;X-Shopify-Storefront-Access-Token&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;client_id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;body&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;apiCall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look at &lt;code&gt;query&lt;/code&gt;... What is that? What is edges? What is node? Why is it structured this way? &lt;strong&gt;¯\_(ツ)_/¯&lt;/strong&gt;&lt;/p&gt;
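
&lt;p&gt;Short version: GraphQL returns lists as "connections", where each item is wrapped in an "edge" and the item itself is the "node". The response you get back looks roughly like this (the IDs and titles here are made up for illustration):&lt;/p&gt;

```javascript
// Rough shape of the Storefront API response for the products query above.
// IDs and titles are invented; yours will come from your store.
const response = {
  data: {
    products: {
      edges: [
        { node: { id: "gid://shopify/Product/1", title: "Blue Mug" } },
        { node: { id: "gid://shopify/Product/2", title: "Red Mug" } },
      ],
    },
  },
};

// Unwrap the edges to get a plain array of products.
const products = response.data.products.edges.map((edge) => edge.node);
console.log(products.map((p) => p.title)); // ["Blue Mug", "Red Mug"]
```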

&lt;p&gt;Now products should be showing up on your console!&lt;/p&gt;

&lt;p&gt;...but if they are not...&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why are some (or all) products not showing up?!&lt;/strong&gt;&lt;br&gt;
It might be that the products that are not being returned aren't available on the relevant sales channel (your Storefront API channel). Let's fix that:&lt;/p&gt;

&lt;p&gt;Visit your products section at:&lt;br&gt;
'&lt;a href="https://my-store.myshopify.com/admin/products"&gt;https://my-store.myshopify.com/admin/products&lt;/a&gt;'&lt;br&gt;
replace 'my-store' with the name of your store.&lt;/p&gt;

&lt;p&gt;It gets pretty weird here, so just bear with me:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IsN15bEH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rl6faw2wd9r6d6zxdeot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IsN15bEH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rl6faw2wd9r6d6zxdeot.png" alt="Image description" width="880" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;click on &lt;em&gt;the checkbox&lt;/em&gt; next to the item(s) that aren't showing up; if you click on the item itself it will take you somewhere else (I found this kind of insane) &lt;/li&gt;
&lt;li&gt;click "Edit products" after you check the box&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P-c6dcpz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fapeazxzfju9fzgkbk6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P-c6dcpz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fapeazxzfju9fzgkbk6v.png" alt="Image description" width="880" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;click the add fields drop down at the top&lt;/li&gt;
&lt;li&gt;then in the "Sales channels" section pick all of the following:

&lt;ul&gt;
&lt;li&gt;Available to Shopify GraphiQL App&lt;/li&gt;
&lt;li&gt;Available to Online Store&lt;/li&gt;
&lt;li&gt;Available to Nextjs-connection
This will populate 3 columns on your "edit products" page; each column will hold an unchecked box. Go ahead and check these boxes.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gaaY9L_K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/za68yrdgm0tyy181t4ld.png" alt="Image description" width="880" height="294"&gt;
Then click "Save" at the top.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You're now good to go! &lt;/p&gt;

&lt;p&gt;This all feels very unintuitive and was a headache to figure out. I hope you're having an easier time than I did navigating the Shopify Storefront API. Happy hacking!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>shopify</category>
      <category>store</category>
    </item>
  </channel>
</rss>
