<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Amanda</title>
    <description>The latest articles on Forem by Amanda (@amandamartindev).</description>
    <link>https://forem.com/amandamartindev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F977362%2F8578b863-ce39-4328-896c-4518adab7271.jpeg</url>
      <title>Forem: Amanda</title>
      <link>https://forem.com/amandamartindev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/amandamartindev"/>
    <language>en</language>
    <item>
      <title>Choosing a model means measuring cost vs quality on your data</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Sun, 22 Mar 2026 16:01:27 +0000</pubDate>
      <link>https://forem.com/amandamartindev/choosing-a-model-means-measuring-cost-vs-quality-on-your-data-1e58</link>
      <guid>https://forem.com/amandamartindev/choosing-a-model-means-measuring-cost-vs-quality-on-your-data-1e58</guid>
      <description>&lt;p&gt;I wanted to evaluate model-based extraction in a way that would tell me more than benchmarks alone. The scenario is building an AI recruiting agent to help match candidates to job postings. To do this, we need to ingest job postings from career pages, aggregators, social media posts, and other messy sources. Every posting needs to be parsed into structured JSON: title, company, salary range, requirements, benefits.&lt;/p&gt;

&lt;p&gt;I set up a comparison with a small dataset of 25 job postings across three model tiers to answer a practical question: does the quality difference between a more expensive model and a budget model justify the cost over time?&lt;/p&gt;

&lt;p&gt;All three models perform competitively on standard benchmarks, which is exactly why I couldn't rely on them to make this call.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;For this exploration, I used &lt;a href="https://www.baseten.co/products/model-apis/" rel="noopener noreferrer"&gt;Baseten's Model APIs&lt;/a&gt;. You can use whatever model provider you like.&lt;/p&gt;

&lt;p&gt;I picked three models across the cost spectrum, tiered by market positioning (prices as of March 2026):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tier&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Active Params&lt;/th&gt;
&lt;th&gt;~Input $/1M tokens&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;DeepSeek V3.1&lt;/td&gt;
&lt;td&gt;671B / 37B active&lt;/td&gt;
&lt;td&gt;$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mid-tier&lt;/td&gt;
&lt;td&gt;Nvidia Nemotron 3 Super&lt;/td&gt;
&lt;td&gt;120B / 12B active&lt;/td&gt;
&lt;td&gt;$0.30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Budget&lt;/td&gt;
&lt;td&gt;OpenAI GPT-OSS-120B&lt;/td&gt;
&lt;td&gt;120B / 5.1B active&lt;/td&gt;
&lt;td&gt;$0.10&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I generated a dataset of 25 job postings with Claude, designed to reflect the kinds of messy variation you see in real job posting data: informal listings, non-English postings, missing fields, hourly rates vs. annual salaries, and multiple currencies. In production, this data would likely come from multiple sources and be much larger.&lt;/p&gt;

&lt;p&gt;The extraction prompt asks for valid JSON with ten fields: title, company, location, work model, salary min/max/currency, requirements, nice-to-haves, and benefits. Temperature is set to 0. For the purpose of this exploration, the same system prompt was used for the entire evaluation.&lt;/p&gt;
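
&lt;p&gt;As a rough sketch, the request for each posting might look something like this. The ten fields match the evaluation, but the prompt wording and model slug here are illustrative placeholders, not the exact ones from my repo.&lt;/p&gt;

```python
# Sketch of the extraction request. The ten fields match the evaluation;
# the prompt wording and model slug are illustrative placeholders.
FIELDS = [
    "title", "company", "location", "work_model",
    "salary_min", "salary_max", "salary_currency",
    "requirements", "nice_to_haves", "benefits",
]

SYSTEM_PROMPT = (
    "Extract the job posting into valid JSON with exactly these keys: "
    + ", ".join(FIELDS)
    + ". Use null for unknown values and arrays for the list fields."
)

def build_request(posting_text, model="deepseek-v3.1"):
    # temperature=0 to keep runs as repeatable as possible
    return {
        "model": model,
        "temperature": 0,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": posting_text},
        ],
    }
```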

&lt;p&gt;For scoring, scalar fields (title, company, location, etc) are compared after normalization with exact match for strings and a 5% tolerance band for numbers. Array fields (requirements, nice-to-haves, benefits) are scored using exact normalized item matches and &lt;a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html" rel="noopener noreferrer"&gt;F1 score&lt;/a&gt;. The overall accuracy per posting is a weighted average across all fields, with title and requirements weighted highest because those matter most for this recruiting agent use case.&lt;/p&gt;
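
&lt;p&gt;To make that concrete, here is a simplified sketch of how such a scorer can be structured. This is not the exact code from my repo; the normalization and weighting below are stand-ins that follow the same rules described above.&lt;/p&gt;

```python
import re

def normalize(value):
    # Lowercase and collapse whitespace so "Senior  Engineer" == "senior engineer"
    return re.sub(r"\s+", " ", value.strip().lower()) if isinstance(value, str) else value

def scalar_score(pred, truth, tol=0.05):
    # Exact match for strings, 5% tolerance band for numbers
    if truth is None:
        return 1.0 if pred is None else 0.0
    if isinstance(truth, (int, float)):
        try:
            return 1.0 if abs(float(pred) - float(truth)) <= tol * abs(float(truth)) else 0.0
        except (TypeError, ValueError):
            return 0.0
    return 1.0 if normalize(pred) == normalize(truth) else 0.0

def array_f1(pred, truth):
    # F1 over exact normalized item matches
    p = {normalize(x) for x in (pred or [])}
    t = {normalize(x) for x in (truth or [])}
    if not p and not t:
        return 1.0
    tp = len(p & t)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(t)
    return 2 * precision * recall / (precision + recall)

def posting_accuracy(field_scores, weights):
    # Weighted average across fields; title and requirements weighted highest
    total = sum(weights[f] for f in field_scores)
    return sum(weights[f] * s for f, s in field_scores.items()) / total
```

&lt;p&gt;With this shape, "Senior  Engineer" matches "senior engineer", a predicted salary of 152,000 matches a ground truth of 150,000 under the 5% band, and array fields degrade gracefully with partial overlap instead of failing outright.&lt;/p&gt;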

&lt;p&gt;This is a deliberately strict metric. For example, a model returning "senior engineer" instead of "senior software engineer" would not get full credit under this scorer, even if a recruiter or downstream system might treat those as the same role family. That strictness is a choice, though: exact-match extraction accuracy is not the same thing as business usefulness.&lt;/p&gt;

&lt;p&gt;I included one reasoning model, Nemotron, because reasoning models "think" before answering and wrap that output in think tags, which is something to account for when building your parser. DeepSeek V3.1 is technically a hybrid model that supports both thinking and non-thinking modes; I didn't specify one, so it ran in its default mode (non-thinking).&lt;/p&gt;

&lt;p&gt;Example reasoning output might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;think&amp;gt;
The posting mentions "$150k - $180k" — I should normalize this to annual integers.
The location says "SF Bay Area" — should I interpret this as San Francisco?
The posting mentions "3 days in office" — this implies Hybrid, not On-site...
&amp;lt;/think&amp;gt;
{"title": "Senior Engineer", "company": "Acme Corp", ...}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
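
&lt;p&gt;A parser for reasoning output can strip the think block before attempting to parse the JSON. A minimal sketch:&lt;/p&gt;

```python
import json
import re

def parse_model_output(raw):
    # Drop any <think>...</think> reasoning block, then parse the remaining JSON
    cleaned = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return json.loads(cleaned)

raw = '<think>3 days in office implies Hybrid...</think>\n{"title": "Senior Engineer"}'
print(parse_model_output(raw)["title"])  # → Senior Engineer
```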



&lt;p&gt;Reasoning models also affect cost because those thinking tokens count toward output. Across three runs, Nemotron averaged roughly 735 output tokens per call compared to 141 for DeepSeek and 481 for OpenAI, which is a big part of why it ended up as the most expensive option in this comparison.&lt;/p&gt;
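
&lt;p&gt;The arithmetic is simple but worth making explicit: thinking tokens bill as output tokens. The output prices below are hypothetical, since the table above only lists input pricing, and the input token count is an invented example.&lt;/p&gt;

```python
def call_cost(input_tokens, output_tokens, input_price, output_price):
    # Prices are per 1M tokens; reasoning ("thinking") tokens count as output
    return (input_tokens / 1e6) * input_price + (output_tokens / 1e6) * output_price

# Hypothetical output prices and input count; output token averages from the runs.
# Nemotron's ~735 output tokens per call vs DeepSeek's ~141 is what drives the gap.
nemotron = call_cost(1200, 735, 0.30, 0.90)
deepseek = call_cost(1200, 141, 0.50, 1.50)
```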

&lt;h2&gt;
  
  
  The results
&lt;/h2&gt;

&lt;p&gt;I ran the comparison three times on the same dataset to account for variation in model runs. One clear pattern across runs is that DeepSeek was always first while GPT-OSS-120B and Nemotron were close with no clear winner for second place.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;DeepSeek&lt;/th&gt;
&lt;th&gt;Nemotron&lt;/th&gt;
&lt;th&gt;OpenAI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Mean Accuracy Across 3 Runs&lt;/td&gt;
&lt;td&gt;74.0%&lt;/td&gt;
&lt;td&gt;69.6%&lt;/td&gt;
&lt;td&gt;70.4%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Accuracy Range Across Runs&lt;/td&gt;
&lt;td&gt;74.0-74.0%&lt;/td&gt;
&lt;td&gt;68.5-70.6%&lt;/td&gt;
&lt;td&gt;70.1-70.6%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JSON Valid Rate&lt;/td&gt;
&lt;td&gt;25/25 in every run&lt;/td&gt;
&lt;td&gt;25/25 in every run&lt;/td&gt;
&lt;td&gt;25/25 in every run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Avg Latency&lt;/td&gt;
&lt;td&gt;0.70s&lt;/td&gt;
&lt;td&gt;1.63s&lt;/td&gt;
&lt;td&gt;2.13s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Avg Cost/Posting&lt;/td&gt;
&lt;td&gt;$0.00042&lt;/td&gt;
&lt;td&gt;$0.00068&lt;/td&gt;
&lt;td&gt;$0.00029&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Est. Cost/100K Posts&lt;/td&gt;
&lt;td&gt;$42.00&lt;/td&gt;
&lt;td&gt;$68.33&lt;/td&gt;
&lt;td&gt;$28.67&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;All three models produce valid JSON 100% of the time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the models actually differ by field
&lt;/h2&gt;

&lt;p&gt;The aggregate scores tell a partial story. Here's the per-field breakdown averaged across all three runs:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;DeepSeek&lt;/th&gt;
&lt;th&gt;Nemotron&lt;/th&gt;
&lt;th&gt;OpenAI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;title&lt;/td&gt;
&lt;td&gt;88.0%&lt;/td&gt;
&lt;td&gt;84.0%&lt;/td&gt;
&lt;td&gt;81.3%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;company&lt;/td&gt;
&lt;td&gt;80.0%&lt;/td&gt;
&lt;td&gt;76.0%&lt;/td&gt;
&lt;td&gt;80.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;location&lt;/td&gt;
&lt;td&gt;44.0%&lt;/td&gt;
&lt;td&gt;32.0%&lt;/td&gt;
&lt;td&gt;38.7%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;work_model&lt;/td&gt;
&lt;td&gt;77.3%&lt;/td&gt;
&lt;td&gt;74.7%&lt;/td&gt;
&lt;td&gt;72.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;salary_min&lt;/td&gt;
&lt;td&gt;82.0%&lt;/td&gt;
&lt;td&gt;80.7%&lt;/td&gt;
&lt;td&gt;80.7%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;salary_max&lt;/td&gt;
&lt;td&gt;84.0%&lt;/td&gt;
&lt;td&gt;80.7%&lt;/td&gt;
&lt;td&gt;80.7%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;salary_currency&lt;/td&gt;
&lt;td&gt;84.0%&lt;/td&gt;
&lt;td&gt;80.0%&lt;/td&gt;
&lt;td&gt;84.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;requirements&lt;/td&gt;
&lt;td&gt;54.6%&lt;/td&gt;
&lt;td&gt;53.0%&lt;/td&gt;
&lt;td&gt;50.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;nice_to_have&lt;/td&gt;
&lt;td&gt;74.0%&lt;/td&gt;
&lt;td&gt;67.3%&lt;/td&gt;
&lt;td&gt;71.6%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;benefits&lt;/td&gt;
&lt;td&gt;80.3%&lt;/td&gt;
&lt;td&gt;75.0%&lt;/td&gt;
&lt;td&gt;75.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A few things stand out. Location was low for everyone: 32-44% across the board. These postings include things like "SF Bay Area," "remote (US only)," and locations written in Portuguese, so that is not surprising. DeepSeek has a slight edge across many categories.&lt;/p&gt;

&lt;p&gt;Nemotron's weakest spots were location and requirements. While I don't know the single cause for this, it's a useful reminder that extra reasoning tokens do not automatically translate into better structured extraction.&lt;/p&gt;

&lt;p&gt;Requirements and location were difficult for all the models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Human review
&lt;/h2&gt;

&lt;p&gt;In general, automated scoring is not enough to confidently choose a model for your agent. How much you validate and against which fields will vary by use case. You may want to review all fields in a subset of data, or you may have one field that must be 100% correct and choose to audit that field across everything.&lt;/p&gt;

&lt;p&gt;Human review might reveal that your automated scoring weights don't reflect what actually matters for your use case.&lt;/p&gt;

&lt;p&gt;In my case, because this was a small exploratory dataset, I reviewed a subset of outputs outside the repo with extra attention on fields that scored lower, especially &lt;code&gt;work_model&lt;/code&gt; and &lt;code&gt;location&lt;/code&gt;. The repo is meant as a companion for readers to run themselves, not as a checked-in record of my manual review.&lt;/p&gt;

&lt;p&gt;A few interesting findings:&lt;/p&gt;

&lt;p&gt;When a posting did not name a real company in the main content, such as a recruiter email or something ambiguous like "stealth startup," all three models either left the company unresolved or returned placeholder-like values such as "Stealth Startup." That is probably the right behavior for a strict extraction pipeline, but it might not be the behavior we want.&lt;/p&gt;

&lt;p&gt;In a posting with dual currency salary bands, each model handled it differently: one took the first band, one mixed values across both bands, and one returned nothing. This could likely be handled with a different field design, since I only captured salary min and max with no flexibility for the dual-currency scenario.&lt;/p&gt;

&lt;p&gt;In listings that named a specific city but did not state remote, in-office, or hybrid, all models tended to set &lt;code&gt;work_model&lt;/code&gt; to null. Whether that is acceptable is another product choice a human needs to make.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost at scale
&lt;/h2&gt;

&lt;p&gt;At 100K postings per month:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High (DeepSeek V3.1):&lt;/strong&gt; ~$42/month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mid-tier (Nemotron):&lt;/strong&gt; ~$69/month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Budget (GPT-OSS-120B):&lt;/strong&gt; ~$29/month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The budget model saves you about $13/month over DeepSeek for roughly a 3.6-point accuracy drop. Nemotron costs more than both while scoring lower on average; the thinking tokens make it the worst value for this particular task.&lt;/p&gt;

&lt;p&gt;If we scale this to 1M postings, the spread becomes roughly $420 vs $683 vs $287 per month, which makes the cost penalty for the reasoning model more visible.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;Ultimately this dataset is too small to support a firm conclusion, but it does offer some interesting areas to explore further against real data.&lt;/p&gt;

&lt;p&gt;For this use case of structured extraction from messy text at volume, the numbers make the OpenAI budget model worth exploring further. Across three runs it stayed fairly close to DeepSeek on aggregate score while remaining the cheapest option, which is still an attractive tradeoff.&lt;/p&gt;

&lt;p&gt;Now, you may be thinking about the latency of the budget model, since it was consistently the slowest. That would matter more if extraction were user-facing and synchronous. In this case, there is no reason the end user needs to trigger extraction and wait on it directly, so batching is a reasonable fit.&lt;/p&gt;

&lt;p&gt;I would be more cautious about claiming OpenAI is definitively better than Nemotron on quality alone. Under this strict scorer, those two were close enough that second place flipped once across the three runs. Ultimately, I would still skip the reasoning model for this kind of extraction. Nemotron's thinking was sometimes useful when extracting from ambiguous formatting, but for structured output the extra cost was not justified by the measured quality here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;I was definitely surprised by these findings and expected a more definitive "winner".&lt;/p&gt;

&lt;p&gt;Now, this exploration covers 25 postings, which is a small sample size. Given the strict scoring method, each scored field swings the accuracy numbers more in a dataset this small than it would in a larger one. Generated data also misses things you only find when pulling this same type of data from real-world sources, like unexpected artifacts. With more data, more runs, and a more rigorous human-in-the-loop step, we might well see different results.&lt;/p&gt;

&lt;p&gt;I also used the same system prompt for all models and test runs. Prompt variations could impact results and are worth exploring.&lt;/p&gt;

&lt;p&gt;What you evaluate will depend on your final product as well. Your structured extraction problem might have different failure modes and need different scoring weights than mine.&lt;/p&gt;

&lt;p&gt;The main takeaway here is that benchmarks alone won't tell you which model handles your messy data best. Build an evaluation around the task your model actually needs to do well, and see what comes back.&lt;/p&gt;

&lt;p&gt;If you would like to run this analysis yourself, the project is &lt;a href="https://github.com/amandamartin-dev/model-comparison-extraction" rel="noopener noreferrer"&gt;hosted on GitHub&lt;/a&gt;. If you have questions or want to chat, please get in touch with me on &lt;a href="https://www.linkedin.com/in/amandamartin-dev/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; or &lt;a href="https://x.com/hey_amandam" rel="noopener noreferrer"&gt;X&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Why your solo agent workflow breaks down in a team build</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Sun, 15 Mar 2026 16:24:06 +0000</pubDate>
      <link>https://forem.com/amandamartindev/why-your-solo-agent-workflow-breaks-down-in-a-team-build-k1m</link>
      <guid>https://forem.com/amandamartindev/why-your-solo-agent-workflow-breaks-down-in-a-team-build-k1m</guid>
      <description>&lt;p&gt;As I have been learning and building with AI, most of my discoveries and improvements have been around the concept of solo developer + agent.&lt;/p&gt;

&lt;p&gt;Recently I had the opportunity to work with a small team on a new build and brought the approach that works for me to the team. While my flow worked, it blocked the team for too long and wasted precious time that my teammates could have spent building more meaningfully.&lt;/p&gt;

&lt;p&gt;Here I want to explore the approach that works well for me solo and what I think changes when you apply that same approach to a team on a greenfield build.&lt;/p&gt;

&lt;p&gt;I am not presenting this as the correct workflow. This is more an exploration of a failure mode I hit, the tradeoff I now see more clearly, and the experiment I would run differently next time.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: You will notice reading this that I do not share the specific build. This project was for a CodeTV episode that has not yet been released. I will update this once it is out.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I build
&lt;/h2&gt;

&lt;p&gt;When I start a new project with an agent, my current focus is spending most of the time up front writing out a plan for the agent to follow: something that gets me to a working application within the boundaries I set, which I can then refine.&lt;/p&gt;

&lt;p&gt;In essence I create a detailed plan across a few files for the agent to follow: a spec, a runbook, and a checklist, with an &lt;code&gt;AGENTS.md&lt;/code&gt; that functions more as the high level order and file map. It covers scope, architecture, features, tech choices, and what is out of scope.&lt;/p&gt;

&lt;p&gt;The goal is simple. Give the agent enough structure to move with confidence and produce a coherent first pass.&lt;/p&gt;

&lt;h2&gt;
  
  
  What happened when I applied this to a team
&lt;/h2&gt;

&lt;p&gt;Naturally I brought this approach to my team with the theory that we could give the agent our requirements, have it spin us up a working starting point, and then divide and iterate from there.&lt;/p&gt;

&lt;p&gt;We had a 6 hour build time and this felt like a good way to get the boring scaffolding out of the way so we could all apply our expertise and creativity where it mattered.&lt;/p&gt;

&lt;p&gt;As a team we briefly reviewed the spec docs to make sure we agreed on the decisions and I gave it to the agent to build. In this case I used Codex 5.4.&lt;/p&gt;

&lt;p&gt;Did it work? Sort of.&lt;/p&gt;

&lt;p&gt;The result was a fully working baseline. It wasn't pretty, but it met the requirements, and the logical flows needed very little refining. We replaced only one technical decision, not the agent's fault in this case, but one area where we needed to change direction. At that point my team focused on UI/UX and adding some creative flows.&lt;/p&gt;

&lt;p&gt;The problem was that the first pass took over an hour, and for a 6 hour build that meant the rest of the team was effectively blocked for a meaningful portion of the project timeline.&lt;/p&gt;

&lt;p&gt;This is what failed: the agent produced good work and the spec was fine, but the artifact it created was too complete when we could have moved faster with a more minimal scaffold.&lt;/p&gt;

&lt;p&gt;By the time we had the baseline we had lost too much of the build window for refinement, UI/UX creativity, and the kind of decisions we need humans to be able to make early.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this breaks in a team setting
&lt;/h2&gt;

&lt;p&gt;As a solo developer, a highly complete first pass is often useful. You are optimizing for coherence and momentum.&lt;/p&gt;

&lt;p&gt;On a team build, especially with a compressed timeline, that same instinct can backfire.&lt;/p&gt;

&lt;p&gt;What I didn't consider is that the first artifact does not just need to work or be complete; it needs to give other people something stable to build against immediately.&lt;/p&gt;

&lt;p&gt;A strong spec and a strong team start are not the same thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I would try next time
&lt;/h2&gt;

&lt;p&gt;If that first pass had taken 5 to 10 minutes, I would probably still think this was a great workflow.&lt;/p&gt;

&lt;p&gt;But once the first pass starts taking a meaningful chunk of the total project time, the question changes:&lt;/p&gt;

&lt;p&gt;What should an agent build first so the rest of the team can start contributing right away?&lt;/p&gt;

&lt;p&gt;For a short build, I do not think the answer is "the whole MVP."&lt;/p&gt;

&lt;p&gt;I think it is something closer to a shared scaffold that gives the team a coordination surface after the first agent pass.&lt;/p&gt;

&lt;p&gt;This could include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;app shell&lt;/li&gt;
&lt;li&gt;route structure&lt;/li&gt;
&lt;li&gt;shared types&lt;/li&gt;
&lt;li&gt;schema or model shapes&lt;/li&gt;
&lt;li&gt;API contracts&lt;/li&gt;
&lt;li&gt;page shells&lt;/li&gt;
&lt;li&gt;placeholder states&lt;/li&gt;
&lt;li&gt;mocked data&lt;/li&gt;
&lt;/ul&gt;
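
&lt;p&gt;To sketch what "shared types, schema shapes, API contracts, and mocked data" can look like as a coordination surface (names here are invented for illustration, and your stack may differ):&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    # Shared shape both the logic stream and the UI stream build against
    id: str
    title: str
    status: str = "draft"
    tags: list[str] = field(default_factory=list)

# Mocked data so the UI stream is unblocked before persistence exists
MOCK_ITEMS = [
    Item(id="1", title="First mocked item"),
    Item(id="2", title="Second mocked item", status="published"),
]

def list_items(status=None):
    # Placeholder API contract; swap the mock for a real store later
    return [i for i in MOCK_ITEMS if status is None or i.status == status]
```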

&lt;p&gt;Just enough structure that people can work without inventing their own versions of the system. As we've all seen, an agent given the same instructions can produce different results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Instead of this
&lt;/h3&gt;

&lt;p&gt;Team writes spec  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one agent builds the full working MVP
&lt;/li&gt;
&lt;li&gt;everyone else waits or does non-code tasks &lt;/li&gt;
&lt;li&gt;team reviews and divides the remaining work &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Perhaps it looks like this
&lt;/h3&gt;

&lt;p&gt;Team writes spec  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one agent builds the minimum scaffold
&lt;/li&gt;
&lt;li&gt;team splits work immediately
&lt;/li&gt;
&lt;li&gt;one stream pushes logic, one stream pushes UI/UX &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a little more detail, the division of work becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one agent generates the shared scaffold&lt;/li&gt;
&lt;li&gt;one human + agent builds the core interaction flow against mocked data&lt;/li&gt;
&lt;li&gt;one human + agent works UI/UX and page feel&lt;/li&gt;
&lt;li&gt;one human + agent focuses on integration edges like auth, persistence, and deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our case the point of failure was that the first pass already included things like fully written API routes, test suites, authentication, notifications, and almost every requirement needed for the final product.&lt;/p&gt;

&lt;p&gt;All examples of things that should have come later.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I took away from it
&lt;/h2&gt;

&lt;p&gt;I had spent so much time optimizing as a solo developer that I did not have the right mental model for a team build.&lt;/p&gt;

&lt;p&gt;For a short build like ours, my thinking has definitely changed.&lt;/p&gt;

&lt;p&gt;The first generated artifact should optimize for parallel human contribution, not maximum autonomous completeness.&lt;/p&gt;

&lt;p&gt;That sounds obvious in hindsight, but it does not always feel obvious when you are staring at a capable model and a good spec and thinking "why not just have it build the whole thing now?"&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;I do not think I have the perfect answer for what workflow is best in every scenario.&lt;/p&gt;

&lt;p&gt;There will be projects where one developer plus agents gets to a strong MVP so quickly that bringing the rest of the team in later is the right move.&lt;/p&gt;

&lt;p&gt;There will also be projects where everyone should start moving early, but against minimal shared artifacts like types and page shells instead of a full application.&lt;/p&gt;

&lt;p&gt;What I am convinced of after this experience is that this should be a deliberate choice based on the constraints of the team, the project, and the shared goals you have.&lt;/p&gt;

&lt;p&gt;Just because your agent can build the whole working application does not mean that is the best place to start.&lt;/p&gt;

&lt;p&gt;A better question to ask might be...&lt;/p&gt;

&lt;p&gt;What is the smallest useful thing we should generate first so that nobody spends the best part of the build waiting?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Catching agent repo drift before evals</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Thu, 05 Mar 2026 16:14:03 +0000</pubDate>
      <link>https://forem.com/amandamartindev/catching-agent-repo-drift-before-evals-4889</link>
      <guid>https://forem.com/amandamartindev/catching-agent-repo-drift-before-evals-4889</guid>
      <description>&lt;p&gt;After covering basic &lt;a href="https://dev.to/amandamartindev/practical-linting-for-agent-context-files-322h"&gt;linting checks in my previous post&lt;/a&gt;, there is another layer worth adding before the more costly behavioral evals.&lt;/p&gt;

&lt;p&gt;You can catch repo drift and convention violations with deterministic checks before paying for slower behavioral eval runs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reference integrity check
&lt;/h3&gt;

&lt;p&gt;One place to start is reference integrity, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;referenced code still exists&lt;/li&gt;
&lt;li&gt;referenced code still contains real implementation
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# reference integrity&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$resolved_ref&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  case&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ref&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;
    &lt;span class="k"&gt;*&lt;/span&gt;.ts|&lt;span class="k"&gt;*&lt;/span&gt;.js|&lt;span class="k"&gt;*&lt;/span&gt;.tsx|&lt;span class="k"&gt;*&lt;/span&gt;.jsx&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;rg &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="s1"&gt;'^\s*(export\s+)?(async\s+)?(function|class|interface|type|const|let|var|enum)\s'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$resolved_ref&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;log_pass &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$relctx&lt;/span&gt;&lt;span class="s2"&gt; -&amp;gt; &lt;/span&gt;&lt;span class="nv"&gt;$ref&lt;/span&gt;&lt;span class="s2"&gt; exists and has declarations"&lt;/span&gt;
      &lt;span class="k"&gt;else
        &lt;/span&gt;log_error &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$relctx&lt;/span&gt;&lt;span class="s2"&gt; -&amp;gt; &lt;/span&gt;&lt;span class="nv"&gt;$ref&lt;/span&gt;&lt;span class="s2"&gt; exists but has no clear declarations"&lt;/span&gt;
      &lt;span class="k"&gt;fi&lt;/span&gt;
      &lt;span class="p"&gt;;;&lt;/span&gt;
  &lt;span class="k"&gt;esac&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;log_warn &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$relctx&lt;/span&gt;&lt;span class="s2"&gt; references '&lt;/span&gt;&lt;span class="nv"&gt;$ref&lt;/span&gt;&lt;span class="s2"&gt;' which no longer exists"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is useful because files can still exist after a refactor, but stop being useful references for your agent.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture drift checks
&lt;/h3&gt;

&lt;p&gt;Agents tend to crawl directory structure to infer where new code should be placed. If the architecture rules drift, the agent may generate code in the wrong location.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# architecture rule in this repo: routes should live in src/routes&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/src/routes"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log_pass &lt;span class="s2"&gt;"src/routes exists"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;log_fail &lt;span class="s2"&gt;"Missing src/routes"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# architecture rule:  import from ./routes, not ./route&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;rg &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"from ['&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;./routes/"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/src/app.ts"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log_pass &lt;span class="s2"&gt;"app.ts imports routes from ./routes"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;log_fail &lt;span class="s2"&gt;"app.ts route imports do not match ./routes architecture"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Instruction drift checks
&lt;/h3&gt;

&lt;p&gt;This example check is looking for contradictory guidance which can confuse agents depending on prompt interpretation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ensure AGENTS.md route guidance matches repo structure&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;rg &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"src/routes"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/AGENTS.md"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/src/routes"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log_warn &lt;span class="s2"&gt;"AGENTS.md references src/routes but the directory no longer exists"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;log_pass &lt;span class="s2"&gt;"Route guidance matches repo structure"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deterministic Anti-Pattern Checks
&lt;/h2&gt;

&lt;p&gt;I think we've all seen coding agents clearly favor certain frameworks based on their training data. You can use these types of checks to enforce project conventions, especially for common violations.&lt;/p&gt;

&lt;p&gt;Some examples of patterns could be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no raw &lt;code&gt;try/catch&lt;/code&gt; in service business logic&lt;/li&gt;
&lt;li&gt;no NestJS decorators in an Express codebase&lt;/li&gt;
&lt;li&gt;no &lt;code&gt;chai&lt;/code&gt; in tests where Jest is the standard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These checks will obviously vary greatly by project, but here are some example snippets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# no try/catch in services&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;rg &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'try\s*\{'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/src/services/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log_fail &lt;span class="s2"&gt;"Service files contain try/catch blocks (should use Result&amp;lt;T&amp;gt;)"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;log_pass &lt;span class="s2"&gt;"No try/catch in service files"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# no chai in tests&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;rg &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"from ['&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]chai['&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/src/test/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log_fail &lt;span class="s2"&gt;"Test files import chai, should use Jest"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;log_pass &lt;span class="s2"&gt;"Test files don't use chai"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# no NestJS decorators in routes/services&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;rg &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'@(Controller|Get|Post|Injectable)'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/src/routes/"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/src/services/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;log_fail &lt;span class="s2"&gt;"Found NestJS decorators in Express layers"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;log_pass &lt;span class="s2"&gt;"No NestJS decorator drift"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These checks enforce repo contracts before you run behavioral LLM-as-judge evals.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to add an eval platform
&lt;/h2&gt;

&lt;p&gt;This post is intentionally script-first and focused on things that can be checked deterministically in your environment. &lt;/p&gt;

&lt;p&gt;Capability drift is different: it's about measuring whether the agent is getting better or worse over time. That gets into behavioral evals and deserves its own exploration.&lt;/p&gt;

&lt;p&gt;Regardless, keep your deterministic repo checks. They are low-lift and valuable!&lt;/p&gt;

&lt;p&gt;I'd love to hear more about how you are approaching checks and evals in your projects. Leave me a comment below.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Practical linting for agent context files</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Mon, 02 Mar 2026 15:00:33 +0000</pubDate>
      <link>https://forem.com/amandamartindev/practical-linting-for-agent-context-files-322h</link>
      <guid>https://forem.com/amandamartindev/practical-linting-for-agent-context-files-322h</guid>
      <description>&lt;p&gt;As more and more developers are adding agent context files, skills, AI PR review, and other AI indicators to their repos I noticed the same questions coming up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do I test these files?&lt;/li&gt;
&lt;li&gt;How do I know my updates are meaningful?&lt;/li&gt;
&lt;li&gt;How do I measure impact?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your agent treats a context file as the truth, so drift and inconsistency over time can introduce unexpected behavior. The non-deterministic nature of agents can feel intimidating, but before you even get to writing behavioral checks, there are simple, deterministic things you can automate that will already feel familiar.&lt;/p&gt;

&lt;h2&gt;
  
  
  How agent context linting is different
&lt;/h2&gt;

&lt;p&gt;Traditional linting checks syntax and style, answering the question "Is this valid JavaScript?" Linting your context files answers different questions.&lt;/p&gt;

&lt;p&gt;Some examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this guidance specific enough for a model to follow?&lt;/li&gt;
&lt;li&gt;Are the file references up to date?&lt;/li&gt;
&lt;li&gt;Are my rules and terminology consistent across context files?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Basic Structural Checks
&lt;/h2&gt;

&lt;p&gt;Quick checks for basic structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;minimum or maximum line and word counts (as a proxy for token consumption). This matters because agents need enough information to do the job without bloating the context window.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example: fail if AGENTS.md is too long for your standards&lt;/span&gt;

&lt;span class="nv"&gt;max_lines&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300
&lt;span class="nv"&gt;line_count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; &amp;lt; AGENTS.md | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;' '&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$line_count&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-gt&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$max_lines&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: AGENTS.md is &lt;/span&gt;&lt;span class="nv"&gt;$line_count&lt;/span&gt;&lt;span class="s2"&gt; lines (max &lt;/span&gt;&lt;span class="nv"&gt;$max_lines&lt;/span&gt;&lt;span class="s2"&gt;)."&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Suggestion: move detailed procedures into focused docs and reference them from AGENTS.md."&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Example split: docs/review-checklist.md, docs/testing-standards.md"&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;2
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;check for required frontmatter in SKILL.md
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example: required frontmatter in SKILL.md&lt;/span&gt;
&lt;span class="c"&gt;#replace .goose with your agent&lt;/span&gt;

&lt;span class="nv"&gt;skill_file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;".goose/skills/context-eval/SKILL.md"&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-1&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$skill_file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="s2"&gt;"^---"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: &lt;/span&gt;&lt;span class="nv"&gt;$skill_file&lt;/span&gt;&lt;span class="s2"&gt; missing YAML frontmatter"&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;2
&lt;span class="k"&gt;fi

if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="s2"&gt;"^name:"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$skill_file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: &lt;/span&gt;&lt;span class="nv"&gt;$skill_file&lt;/span&gt;&lt;span class="s2"&gt; missing required 'name' field"&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;2
&lt;span class="k"&gt;fi

if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="s2"&gt;"^description:"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$skill_file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: &lt;/span&gt;&lt;span class="nv"&gt;$skill_file&lt;/span&gt;&lt;span class="s2"&gt; missing required 'description' field"&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;2
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"PASS: frontmatter check"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Other useful checks include flagging TODO markers that may confuse agents, and verifying that any files referenced in your context files still exist.&lt;/p&gt;
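&lt;p&gt;Both are quick to script. Here is a minimal sketch; the &lt;code&gt;.md&lt;/code&gt;-only path pattern is a simplification, so adjust it to the kinds of paths your context files actually reference.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# flag unresolved TODO/FIXME markers that may confuse an agent
if grep -Eq 'TODO|FIXME' AGENTS.md; then
  echo "WARN: AGENTS.md contains TODO/FIXME markers"
fi

# verify that markdown paths referenced in AGENTS.md still exist
for ref in $(grep -oE '[A-Za-z0-9_./-]+\.md' AGENTS.md | sort -u); do
  if [ ! -e "$ref" ]; then
    echo "WARN: AGENTS.md references missing file: $ref"
  fi
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;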

&lt;h2&gt;
  
  
  Instruction Quality Checks
&lt;/h2&gt;

&lt;p&gt;Generic phrases like "follow best practices" are not useful to your agents, so another strategy is flagging weak wording patterns to keep instructions concrete.&lt;/p&gt;

&lt;p&gt;This can be accomplished in your own scripting, but here you may want to consider tooling built for validating prose, like &lt;a href="https://vale.sh/" rel="noopener noreferrer"&gt;Vale&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Vale is a prose linter similar to ESLint, but for natural language in your markdown and context files. You define patterns and configure Vale to scan and report severity as a &lt;code&gt;suggestion&lt;/code&gt;, &lt;code&gt;warning&lt;/code&gt;, or &lt;code&gt;error&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For agent context files, this is most useful for catching vague language that agents handle poorly, as opposed to checking file structure.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before: "Follow best practices for error handling."&lt;/li&gt;
&lt;li&gt;After: "In &lt;code&gt;src/services/&lt;/code&gt;, use &lt;code&gt;Result&amp;lt;T&amp;gt;&lt;/code&gt; from &lt;code&gt;src/common/result.ts&lt;/code&gt; and avoid raw &lt;code&gt;try/catch&lt;/code&gt; for business logic."
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="c"&gt;# .vale.ini
&lt;/span&gt;&lt;span class="py"&gt;StylesPath&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;styles&lt;/span&gt;
&lt;span class="py"&gt;MinAlertLevel&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;

&lt;span class="nn"&gt;[*.md]&lt;/span&gt;
&lt;span class="py"&gt;BasedOnStyles&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;ContextQuality&lt;/span&gt;
&lt;span class="py"&gt;ContextQuality.WeakReviewVerbs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;NO&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# styles/ContextQuality/GenericPhrases.yml&lt;/span&gt;
&lt;span class="na"&gt;extends&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;existence&lt;/span&gt;
&lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Replace&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;generic&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;phrase&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'%s'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;with&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;project-specific&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;guidance."&lt;/span&gt;
&lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
&lt;span class="na"&gt;ignorecase&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;tokens&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;be helpful&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;follow best practices&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;be concise&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;Linting can catch obvious quality issues early and is quick to set up. It can run on every PR, while your more costly behavioral evaluations run nightly (or whatever cadence makes sense for your project).&lt;/p&gt;
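&lt;p&gt;As one way to wire that cadence up (this assumes GitHub Actions, and the script names are placeholders for your own checks):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# .github/workflows/context-lint.yml (hypothetical example)
name: context-lint
on:
  pull_request:
  schedule:
    - cron: "0 3 * * *"   # nightly slot for the costlier checks
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: bash scripts/lint-context.sh     # placeholder: fast deterministic lint
  behavioral:
    if: github.event_name == 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: bash scripts/behavioral-evals.sh # placeholder: behavioral eval suite
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;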

&lt;p&gt;Remember, linting for agents comes with expectations similar to linting your code: fast feedback on edits, fewer instruction-quality issues, and less drift. You should not rely on linting to predict agent behavior.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Vibe coding as a love language</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Sun, 18 Jan 2026 13:58:57 +0000</pubDate>
      <link>https://forem.com/amandamartindev/vibe-coding-as-a-love-language-4321</link>
      <guid>https://forem.com/amandamartindev/vibe-coding-as-a-love-language-4321</guid>
      <description>&lt;p&gt;I never used to be a person who coded on the weekends.&lt;/p&gt;

&lt;p&gt;Not because I didn't have ideas, but because it felt unfair to my partner to sit holed up at my desk hacking away all weekend instead of spending time with him. &lt;/p&gt;

&lt;p&gt;Coding with AI tooling has enabled me to scratch the itch of creating silly things with no business value, just for fun, and to share that experience. Because the barrier to contributing is just natural language, creating a program can now be a fun game or puzzle we do together.&lt;/p&gt;

&lt;p&gt;This weekend I wanted to recreate the viral speed reading video as an application that pulls in unique articles and also tests your reading comprehension. I wanted to do this because after I took that test I felt like the fastest reader ever, but I had no clue whether I understood what I had just read.&lt;/p&gt;

&lt;p&gt;Since I wanted to do this with my husband and he has zero technical background (or interest), I started with &lt;a href="https://block.github.io/goose/docs/quickstart/" rel="noopener noreferrer"&gt;goose desktop&lt;/a&gt;. Together we crafted the initial prompt around the idea.  &lt;/p&gt;

&lt;p&gt;This was a very different way of working for me: no real plan, and I didn't specify the tech. It was about having fun and making something together.&lt;/p&gt;

&lt;p&gt;We started very simple...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09csdb3ec2n2ohukcfil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09csdb3ec2n2ohukcfil.png" alt="goose desktop" width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;...with this prompt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I want to make a website based on the viral speed reading test. This site should pick a wikipedia article at random for the test and have an input at the end where the player has to put in what they understood about what they just read. The site should then be able to judge the reading comprehension. Help me design a plan to implement this. The first step will be understanding what the viral speed reading test is so you will need to research that
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We used Sonnet 4.5, as there was no reason to call on anything more robust here. Not surprisingly, the model chose to create a Next.js application. React is king - no argument from me!&lt;/p&gt;

&lt;p&gt;Once my husband saw it working, he had ideas! He is a comms professional with a background in journalism, so he immediately wanted the font to be more readable and the red letter focal point fixed so it wouldn't bounce around as much.&lt;/p&gt;

&lt;p&gt;He was pointing at the screen and telling me what to tell goose to do. We iterated a bit together like this and ended up with a great starting point.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dmc1t2r2j5cb9vapmj3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dmc1t2r2j5cb9vapmj3.gif" alt="starting the speed reading test" width="1096" height="718"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I wanted to add a way to pull a fresh article and a loading state in case the first article returned wasn't long enough. So I added that next.&lt;/p&gt;

&lt;p&gt;Here's the test and restart flow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjri3ydg6bxqxl70ojql.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjri3ydg6bxqxl70ojql.gif" alt="test after reading" width="560" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While this was truly a for-fun exercise, it was also interesting to see how someone far removed from any tech understanding uses a tool like this and how he problem solved. He was more likely to describe how he wanted the application to feel, or to focus on the design. Very different from the developers I focus on for work!&lt;/p&gt;

&lt;p&gt;As a developer, it was definitely hard for me to walk away from the "finished" project. I could see the rough edges and started thinking about user experience issues: what happens if there are network issues, what happens if it takes 20 tries to find something long enough, how should it handle duplicate content, does it work on mobile... on and on.&lt;/p&gt;

&lt;p&gt;At the end of the day though, this wasn't about making a polished product. This was about a new world of fun that you can share with anyone, even your most non-technical friend or partner. Perhaps the future of social interactions includes ephemeral games we spin up and play over an evening together.  &lt;/p&gt;

&lt;p&gt;There is a lot of hate in the world for AI, and there are plenty of valid points to explore there, but in many ways this technology can bring us closer to the people we love and help us find new ways to connect with them.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>webdev</category>
      <category>goose</category>
    </item>
    <item>
      <title>Dynamic MCP Server discovery with goose</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Tue, 13 Jan 2026 13:55:00 +0000</pubDate>
      <link>https://forem.com/amandamartindev/dynamic-mcp-server-discovery-with-goose-3m41</link>
      <guid>https://forem.com/amandamartindev/dynamic-mcp-server-discovery-with-goose-3m41</guid>
      <description>&lt;p&gt;A common sentiment I hear about MCP servers is that you have to have them all enabled prior to your session to be able to use them and that's wasteful.&lt;/p&gt;

&lt;p&gt;I realized, in talking to some engineers I work with, that some of the options for dynamically using MCP Servers might not be known to everyone.&lt;/p&gt;

&lt;p&gt;I often use the goose agent internally and a feature that is very useful for anyone with multiple MCP Servers is the &lt;a href="https://block.github.io/goose/docs/mcp/extension-manager-mcp/" rel="noopener noreferrer"&gt;Extensions Manager&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This tool allows goose to search for and enable relevant MCP Servers depending on your needs. This works even for servers that require auth as long as you have configured them.&lt;/p&gt;

&lt;p&gt;In this example I have the GitHub MCP Server configured with my required PAT and disabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezur15aycq2vkguq574s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezur15aycq2vkguq574s.png" alt="goose extensions dashboard" width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then ask goose to complete a task that requires GitHub. The first step goose takes is to check whether an MCP tool is available to help. Because GitHub is an authed server, I have to confirm a second time, and then goose can run with my task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj5kg4dtiix28us9n8ew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj5kg4dtiix28us9n8ew.png" alt=" " width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While I used goose desktop for visuals here, you can do this in the terminal if that's where you like to work.&lt;/p&gt;

&lt;p&gt;Check it out and let me know other ways you are using MCP!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>agents</category>
    </item>
    <item>
      <title>How I added experimental MCP Apps support to Apollo MCP Server with Goose</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Sat, 10 Jan 2026 21:27:22 +0000</pubDate>
      <link>https://forem.com/amandamartindev/how-i-added-experimental-mcp-apps-support-to-apollo-mcp-server-with-goose-2hjl</link>
      <guid>https://forem.com/amandamartindev/how-i-added-experimental-mcp-apps-support-to-apollo-mcp-server-with-goose-2hjl</guid>
      <description>&lt;p&gt;The challenge: contribute to my companies codebase in a language I do not write using an agentic dev workflow.&lt;/p&gt;

&lt;p&gt;As a developer advocate, I'm often creating demo and educational code versus contributing to our product codebases. Over the holiday break I wanted to challenge myself to fully lean into agentic coding by adding experimental support for MCP Apps draft spec to the Apollo MCP Server (&lt;a href="https://github.com/amandamartin-dev/apollo-mcp-server-testing" rel="noopener noreferrer"&gt;my experimental repo here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;It was time to test everything I teach in a real-world scenario, in a language I do not write... Rust.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.apollographql.com/docs/apollo-mcp-server" rel="noopener noreferrer"&gt;Apollo MCP Server&lt;/a&gt; is open source and exposes GraphQL operations as MCP tools without any additional code. We recently added support for the OpenAI Apps SDK, so evolving into MCP UI apps was a great next challenge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Constraints
&lt;/h2&gt;

&lt;p&gt;The agent had to add experimental support for MCP Apps without breaking existing functionality or interfering with the OpenAI Apps implementation. To validate my changes, I also needed a separate repository with a prototype MCP UI App for my Luma Community Analytics tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Tools
&lt;/h3&gt;

&lt;p&gt;These are the tools and techniques I leveraged to make this a successful build.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://block.github.io/goose/docs/quickstart" rel="noopener noreferrer"&gt;goose CLI and Desktop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Markdown files to provide instructions and important information to the agent including MCP Spec details and general project goals and rules&lt;/li&gt;
&lt;li&gt;Apollo MCP Server&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.mcpjam.com/" rel="noopener noreferrer"&gt;MCP Jam for testing&lt;/a&gt; &lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Build
&lt;/h2&gt;

&lt;p&gt;I needed to work with the agent to establish a flow that followed research/plan → code in chunks → test → report deviations → repeat. This was the core idea behind making this a successful test.&lt;/p&gt;

&lt;p&gt;My research started with a simple prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"research all files in the apollo-mcp-server local repo to understand the OpenAI apps implementation and the details of the draft MCP UI Apps spec. Then create a plan to add expirimental support for MCP apps that preserves the open ai SDK version"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This wasn't a magical plan that was perfect the first time. Iterating on the plan was the most "human in the loop" work I did on this project. In my experience, spending time on a planning phase is very important when working in an existing codebase, especially when adding a new or experimental feature that should not impact the rest of the project.&lt;/p&gt;

&lt;p&gt;One feature of my plan that increased the agent's success was having each chunk of work followed by full testing of the code. This way, errors were caught and fixed along the way rather than piling up into a mess at the end. I also requested that the final report include details of anywhere the agent needed to deviate from the plan based on errors or other information.&lt;/p&gt;

&lt;p&gt;Once it was complete, I built the Rust binary and added it to an MCP Apps project for testing.&lt;/p&gt;

&lt;p&gt;Did it work the first time? No.&lt;/p&gt;

&lt;p&gt;However, I was able to quickly debug with goose to find the few loose ends that did not get caught, such as a query parameter used in our OpenAI SDK version that was not actually MCP Apps compliant but didn't flag in a test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gykguepsyl3dqknz4he.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gykguepsyl3dqknz4he.png" alt="the offending query param code" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the app working, I was able to do a preliminary test in MCP Jam, an alternative to the MCP Inspector. It's a great experience, and the regular Inspector does not currently support apps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8ypiosc0yjll3y0jkb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8ypiosc0yjll3y0jkb1.png" alt="initial load test in mcp jam" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point... I had to wait. Remember, &lt;a href="https://github.com/modelcontextprotocol/modelcontextprotocol/pull/1865" rel="noopener noreferrer"&gt;MCP Apps is in draft&lt;/a&gt; and the agent ecosystem hadn't begun to support it yet. The Goose team was almost there, but it was the holiday break, so I actually shut my laptop. (Go touch grass?)&lt;/p&gt;

&lt;h2&gt;
  
  
  The final test!
&lt;/h2&gt;

&lt;p&gt;The Goose team released draft spec support in early January in v1.19.0! I immediately rushed to test in the Goose desktop agent but ran into a few issues.&lt;/p&gt;

&lt;p&gt;Back to goose again to debug!&lt;/p&gt;

&lt;p&gt;Goose helped me discover two small bugs in the early release of MCP Apps support, and I was able to report them to the team so they could &lt;a href="https://github.com/block/goose/releases/tag/v1.19.1" rel="noopener noreferrer"&gt;quickly release a patch&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You read that right. I made goose debug goose!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu98s5iz06nq5rvgxwvt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu98s5iz06nq5rvgxwvt.png" alt="spiderman pointing as himself meme" width="567" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ultimately, I ended up with an early prototype of my Luma events dashboard, which will be a great tool for folks internally here at Apollo to understand community metrics for the events I host.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6nkkjgowp2rb8pjfuxd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6nkkjgowp2rb8pjfuxd.png" alt="Screenshot of goose desktop with a running mcp app" width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Learnings
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;You don't have to be an expert in a language to contribute a meaningful feature to a codebase&lt;/li&gt;
&lt;li&gt;Plan, Iterate, Plan&lt;/li&gt;
&lt;li&gt;Agents working in small chunks and always testing means they can catch errors early, fix and move on while you grab a coffee&lt;/li&gt;
&lt;li&gt;Is perfection the goal? In my experience, not yet, but I was able to move much faster as a developer. The server updates and the new app were completed in half a day, with me starting from zero knowledge of the draft spec or the codebase.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Note: After I completed this, I saw a post by Angie Jones describing a Research -&amp;gt; Plan -&amp;gt; Implement flow with recipes, which she also released over the holiday break. If I were starting this flow again today, this is where I would begin. &lt;a href="https://block.github.io/goose/docs/tutorials/rpi/" rel="noopener noreferrer"&gt;Check the docs here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Can you do this? Yes! You can test out my &lt;a href="https://github.com/amandamartin-dev/luma-analytics-mcp-app/tree/main" rel="noopener noreferrer"&gt;Luma Analytics MCP App Prototype&lt;/a&gt; or build your own. I'd love to hear more about how you leverage agentic coding in the comments.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>mcp</category>
    </item>
    <item>
      <title>Meet the Builders: Highlights from the MCP Server Builder Meetup</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Tue, 08 Jul 2025 16:33:39 +0000</pubDate>
      <link>https://forem.com/apollographql/meet-the-builders-highlights-from-the-mcp-server-builder-meetup-5821</link>
      <guid>https://forem.com/apollographql/meet-the-builders-highlights-from-the-mcp-server-builder-meetup-5821</guid>
      <description>&lt;p&gt;On June 18th, we hosted our very first MCP Server Builder Meetup in San Francisco, bringing together engineers, tinkerers, and early adopters to explore the future of building AI-native developer experiences with Model Context Protocol (MCP).&lt;/p&gt;

&lt;p&gt;This was more than just a showcase of cool demos. It marked the launch of a new community of developers building for and with LLMs using MCP servers. The energy in the room made one thing clear: this community is ready to build what's next.&lt;/p&gt;

&lt;p&gt;Keep reading for a recap of all the demos with links to the full sessions.  Our next events in the series are on July 29th in NYC and July 31st in San Francisco.  We hope to see you there. RSVP and subscribe in our &lt;a href="https://lu.ma/mcp-server" rel="noopener noreferrer"&gt;Luma calendar&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jordan Bergero (Block): Building Reliable MCP Servers with Goose &amp;amp; MCP Tool Layering in Square MCP Server
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhztmsfjb35nqhkd5p1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhztmsfjb35nqhkd5p1f.png" alt="Jordan bergero of block" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jordan shared how his team turned an internal hack week project into a production-grade MCP server at Square, which now powers real developer workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduced Goose: an open source, LLM-agnostic tool for running and testing MCP servers locally in a GUI or terminal experience.&lt;/li&gt;
&lt;li&gt;Explained Square’s "layered approach" (discover → plan → execute) to reduce API confusion. This structured layering helps make results predictable and reproducible.&lt;/li&gt;
&lt;li&gt;Goose read a raw .txt file of invoice notes, parsed out customer and payment data, called multiple Square API endpoints (customer create, order create, invoice publish), and emailed a real invoice, all from a single prompt.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; An impressive real-world example of turning API chaos into structured tool behavior. &lt;/p&gt;

&lt;p&gt;Check out Jordan’s &lt;a href="https://youtu.be/tA0-xf85FuM?feature=shared" rel="noopener noreferrer"&gt;full session on YouTube&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/jordan-bergero" rel="noopener noreferrer"&gt;connect with them on LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Melissa Herrera (Langflow): Langflow as a Visual MCP Client and Server
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31ew0wxgb9iusldb7btr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31ew0wxgb9iusldb7btr.png" alt="Melissa Hererra" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
Melissa brought the energy and delivered a lightning-fast tour of Langflow, a visual IDE for building agent workflows with drag-and-drop ease.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MCP Client &amp;amp; Server: Langflow can act as both an MCP client (calling tools via agent workflows) and a server (exposing your flows as tools), letting you chain, compose, and republish.&lt;/li&gt;
&lt;li&gt;Multi-agent architecture: She showcased a resume enhancer app that parsed a resume, queried live job market data via Tavily, and returned improvements, all orchestrated across multiple agents.&lt;/li&gt;
&lt;li&gt;Tool Reuse: Demonstrated turning any Langflow component into a reusable tool and exporting it as part of a server bundle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; For devs or visual thinkers, Langflow is a developer-friendly launchpad for building composable, LLM-native tools.&lt;/p&gt;

&lt;p&gt;Check out Melissa's &lt;a href="https://youtu.be/EOXPaLF8_gw?feature=shared" rel="noopener noreferrer"&gt;full session on YouTube&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/herrera-melissa" rel="noopener noreferrer"&gt;connect with them on LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lizzie Siegle (Cloudflare): Podcast Generator with Workers + MCP
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwp.apollographql.com%2Fwp-content%2Fuploads%2F2025%2F07%2FScreenshot-2025-07-01-at-3.15.52%25E2%2580%25AFPM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwp.apollographql.com%2Fwp-content%2Fuploads%2F2025%2F07%2FScreenshot-2025-07-01-at-3.15.52%25E2%2580%25AFPM.png" alt="Lizzie Siegle" width="800" height="446"&gt;&lt;/a&gt;&lt;br&gt;
Lizzie brought some joy (and a surprise) with her talk on building and deploying fun, voice-powered MCP apps on Cloudflare Workers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Highlights&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Serverless podcast generator: Combined Claude, Workers AI, and a Cloudflare D1 SQL database to build a podcast generator that outputs both audio and script content.&lt;/li&gt;
&lt;li&gt;End-to-end deployment: Demonstrated how to go from zero to a deployed MCP server using Cloudflare’s click-to-deploy button. Most code was auto-generated, including durable object support.&lt;/li&gt;
&lt;li&gt;Tool listing &amp;amp; persistence: Saved generated podcast metadata and audio URLs to SQL, and exposed a tool to query prior results.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Cloudflare’s infra stack is well-suited for lightweight AI applications. If you want a fast way to host, persist, and serve LLM workflows globally, this is a great blueprint.&lt;/p&gt;

&lt;p&gt;Check out Lizzie's &lt;a href="https://www.youtube.com/watch?v=vjgHr5temTM&amp;amp;list=PLpi1lPB6opQyLjI99abvDXZ-OsWbO4Yt4&amp;amp;index=4" rel="noopener noreferrer"&gt;full session on YouTube&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/elsiegle/" rel="noopener noreferrer"&gt;connect with them on LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tobin South (WorkOS): Securing MCP Servers
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwp.apollographql.com%2Fwp-content%2Fuploads%2F2025%2F07%2FScreenshot-2025-07-01-at-3.17.38%25E2%2580%25AFPM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwp.apollographql.com%2Fwp-content%2Fuploads%2F2025%2F07%2FScreenshot-2025-07-01-at-3.17.38%25E2%2580%25AFPM.png" alt="Tobin South" width="800" height="448"&gt;&lt;/a&gt;&lt;br&gt;
Tobin delivered a high-impact talk focused on the security foundations of MCP infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Highlighted common authentication pitfalls in community-deployed MCP servers and the growing need for OAuth and SSO.&lt;/li&gt;
&lt;li&gt;Gave a crash course in OAuth and SSO for MCP, including how to support dynamic client registration properly.&lt;/li&gt;
&lt;li&gt;Demoed a secure, OAuth-powered MCP server for ordering custom swag.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; If you’re shipping MCP servers to users or enterprises, this talk is your must-watch.&lt;/p&gt;

&lt;p&gt;Check out Tobin's &lt;a href="https://youtu.be/Zk3V0QE9Uho?feature=shared" rel="noopener noreferrer"&gt;full session on YouTube&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/tobinsouth" rel="noopener noreferrer"&gt;connect with them on LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Michael Watson (Apollo): Token-Efficient MCP Servers with GraphQL
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwp.apollographql.com%2Fwp-content%2Fuploads%2F2025%2F07%2FScreenshot-2025-07-01-at-3.23.40%25E2%2580%25AFPM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwp.apollographql.com%2Fwp-content%2Fuploads%2F2025%2F07%2FScreenshot-2025-07-01-at-3.23.40%25E2%2580%25AFPM.png" alt="Michael Watson" width="800" height="446"&gt;&lt;/a&gt;&lt;br&gt;
Watson closed the night with a deep dive into how token efficiency and schema-first tooling can make LLM interactions smarter and cheaper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Token bloat audit: Analyzed GitHub's MCP server responses, which produced 8.5k+ tokens per tool call due to duplicate and irrelevant fields.&lt;/li&gt;
&lt;li&gt;Selective GraphQL queries: Used GraphQL selection sets to omit unneeded data, reducing token usage by 75% to about 2,000 tokens.&lt;/li&gt;
&lt;li&gt;Hot-reloadable tools: Showed how the &lt;a href="https://www.apollographql.com/docs/apollo-mcp-server" rel="noopener noreferrer"&gt;Apollo MCP server&lt;/a&gt; can load tools directly from .graphql files with no build step needed.&lt;/li&gt;
&lt;li&gt;Live demo: Used Goose to introspect GitHub’s GraphQL schema and auto-generate a working tool in minutes, then hot-loaded it into the server.&lt;/li&gt;
&lt;/ul&gt;
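
&lt;p&gt;To make the selection-set idea concrete, here's a rough sketch of a token-efficient tool operation. The field names below come from GitHub's public GraphQL schema, but treat them as illustrative rather than the exact operation from the demo:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Only the fields listed here come back in the response,
# so duplicate and irrelevant data never reaches the model.
query RecentOpenIssues($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    issues(last: 5, states: OPEN) {
      nodes {
        title
        url
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;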

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; GraphQL-first design gives you clean, typed, and cost-efficient tooling which is a superpower for scaling MCP use.&lt;/p&gt;

&lt;p&gt;Check out Watson's &lt;a href="https://youtu.be/_7J1A4IXh-s?feature=shared" rel="noopener noreferrer"&gt;full session on YouTube&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/michael-watson-%F0%9F%96%A5-85844442" rel="noopener noreferrer"&gt;connect with them on LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next: Join the MCP Server Builder Community
&lt;/h2&gt;

&lt;p&gt;This was just the beginning. Our mission is to support and grow a community of developers building smarter MCP tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out our MCP Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apollo's own &lt;a href="https://github.com/apollographql/apollo-mcp-server" rel="noopener noreferrer"&gt;MCP server is open source&lt;/a&gt; and we have a &lt;a href="https://www.apollographql.com/tutorials/intro-mcp-graphql" rel="noopener noreferrer"&gt;full tutorial on Odyssey&lt;/a&gt; that will get you up and running building your own tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Join future events&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We host meetups regularly. Check out our &lt;a href="https://lu.ma/mcp-server" rel="noopener noreferrer"&gt;Luma event page&lt;/a&gt; to RSVP for the next one in SF, NYC, or virtually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get involved&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Got something to show? Built a server? Join our talks or share in the &lt;a href="https://community.apollographql.com/c/mcp-server/41" rel="noopener noreferrer"&gt;Apollo Community&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>graphql</category>
      <category>community</category>
    </item>
    <item>
      <title>From REST to GraphQL in minutes with prebuilt Connectors</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Fri, 13 Jun 2025 15:25:32 +0000</pubDate>
      <link>https://forem.com/apollographql/from-rest-to-graphql-in-minutes-with-prebuilt-connectors-nkf</link>
      <guid>https://forem.com/apollographql/from-rest-to-graphql-in-minutes-with-prebuilt-connectors-nkf</guid>
      <description>&lt;p&gt;We’ve all been there: vague API docs, an outdated OpenAPI spec, and a half-buried list of endpoints that leave you guessing. With prebuilt &lt;a href="https://www.apollographql.com/docs/graphos/connectors" rel="noopener noreferrer"&gt;REST Connectors&lt;/a&gt;, you skip the guesswork. Just download the schema files, run your graph locally, and start querying live data with GraphQL.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll use the &lt;a href="https://thespacedevs.com/llapi" rel="noopener noreferrer"&gt;Space Devs Launch Library 2&lt;/a&gt; in the &lt;a href="https://github.com/apollographql/connectors-community/tree/main" rel="noopener noreferrer"&gt;Connectors Community repo&lt;/a&gt; as an example to show how easy it is to integrate a REST API into your graph in minutes and how you can tailor it to meet your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create an &lt;a href="https://studio.apollographql.com/" rel="noopener noreferrer"&gt;Apollo Studio&lt;/a&gt; account. This allows you to create and manage your graph, providing the necessary credentials, APOLLO_KEY and APOLLO_GRAPH_REF (more on those later…), which the Apollo Router uses to fetch the supergraph schema and run locally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.apollographql.com/docs/rover/getting-started#installation-methods" rel="noopener noreferrer"&gt;Install&lt;/a&gt; and &lt;a href="https://www.apollographql.com/docs/rover/configuring" rel="noopener noreferrer"&gt;authenticate&lt;/a&gt; the Rover CLI, which we will use for configuring our graph and running Apollo Router locally.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;In your working directory, initialize your schema with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rover init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select the option to create a new graph.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ogqiqgrv2t7gwv702z7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ogqiqgrv2t7gwv702z7.png" alt="image of the terminal with create a new graph selected" width="800" height="280"&gt;&lt;/a&gt;&lt;br&gt;
Then “Start a graph with one or more REST APIs”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepfta3xr068a6ykj9e5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepfta3xr068a6ykj9e5a.png" alt="inage of the terminal with selecti option: create a new graph" width="800" height="302"&gt;&lt;/a&gt;&lt;br&gt;
Next, you will name your graph, and Rover will generate a new graph ID and APOLLO_KEY for you. Once your APOLLO_KEY and APOLLO_GRAPH_REF are generated, you can start the router with the command provided, but I would recommend putting these values in your VSCode settings.json instead. The configuration below will ensure these environment variables are loaded into any new VSCode terminal window you open:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{&lt;/span&gt;

 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;terminal.integrated.profiles.osx"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;

   &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;graphos"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;

     &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;path"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zsh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;

     &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;args"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-l"&lt;/span&gt;&lt;span class="pi"&gt;],&lt;/span&gt;

     &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;env"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;

       &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;APOLLO_KEY"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;YOUR_KEY&amp;gt;"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;

       &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;APOLLO_GRAPH_REF"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;GRAPH_REF&amp;gt;"&lt;/span&gt;

     &lt;span class="pi"&gt;}&lt;/span&gt;

   &lt;span class="pi"&gt;}&lt;/span&gt;

 &lt;span class="pi"&gt;},&lt;/span&gt;

 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;terminal.integrated.defaultProfile.osx"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;graphos"&lt;/span&gt;

&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rover also generated a supergraph.yaml file for you. You can delete it, since we'll use the one from the Connectors Community repo in the next step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using thespacedevs prebuilt connector
&lt;/h2&gt;

&lt;p&gt;Next, navigate to the &lt;a href="https://github.com/apollographql/connectors-community/tree/main/connectors/thespacedevs" rel="noopener noreferrer"&gt;thespacedevs folder&lt;/a&gt; in the Connectors Community repo. This Connector is an implementation of the &lt;a href="https://thespacedevs.com/llapi" rel="noopener noreferrer"&gt;Space Devs Launch Library 2&lt;/a&gt;. Download all of the .graphql files there along with the supergraph.yaml, and add them to the root of your project folder. Rover generates test schema files and a supergraph of its own; anything that is duplicated or that you do not need can be removed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy28ybqbvql9kjd3lveqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy28ybqbvql9kjd3lveqs.png" alt="Image of the space devs repo in github with files surrounded by a red box" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, open your terminal and run the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rover dev --supergraph-config supergraph.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it! Your graph is live at &lt;a href="http://localhost:4000" rel="noopener noreferrer"&gt;http://localhost:4000&lt;/a&gt;. Pop it open in your browser to explore the queries in Apollo Sandbox.&lt;/p&gt;
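
&lt;p&gt;For example, a query along these lines returns upcoming launches. I'm guessing at the sub-fields of LaunchConnection here, so use the schema reference in Sandbox to confirm the exact names in your copy of the schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query UpcomingLaunches {
  # search, limit, and offset are arguments defined in launches.graphql
  upcomingLaunches(search: "NASA", limit: 3) {
    results {
      name
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;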

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d3tdi2i33u9iaoziuhk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d3tdi2i33u9iaoziuhk.gif" alt="gif of launches query executing" width="640" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Modifying a prebuilt connector
&lt;/h2&gt;

&lt;p&gt;You can use a prebuilt Connector as is or modify it for your application.  For example, I’d like to add more information about what a valid search string looks like for upcoming launches.&lt;/p&gt;

&lt;p&gt;In launches.graphql, find the upcomingLaunches query and place this comment above it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Fetches a list of upcoming launches by agency name. 
Try Starlink, Nasa, or Exa for example.
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;span class="nf"&gt;upcomingLaunches&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;search&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;LaunchConnection&lt;/span&gt;

   &lt;span class="nd"&gt;@connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;

     &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llv2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

     &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

       &lt;span class="n"&gt;GET&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/launches/upcoming/?search={$args.search}&amp;amp;limit={$args.limit}&amp;amp;offset={$args.offset}&amp;amp;format=json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

     &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="n"&gt;rest&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save your file and you will see your new comment as context in the query builder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fequwlkkqemkd22nvojrp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fequwlkkqemkd22nvojrp.png" alt="Sandbox with new description added under the upcoming launches query" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;Once you have the prebuilt Connector ready for your use case, you are ready to integrate it into a client. To continue exploring where to use this Connector, take a look at &lt;a href="https://www.apollographql.com/blog/simplify-your-rest-api-logic-in-react-with-connectors-for-rest-apis-and-graphql" rel="noopener noreferrer"&gt;this tutorial&lt;/a&gt; to learn how to replace traditional REST API calls in a React and Next.js application. Or write your very first MCP tool using the &lt;a href="https://www.apollographql.com/docs/apollo-mcp-server" rel="noopener noreferrer"&gt;Apollo MCP server&lt;/a&gt;. There’s even a &lt;a href="https://github.com/apollographql/apollo-mcp-server/tree/main/graphql/TheSpaceDevs" rel="noopener noreferrer"&gt;ready-to-go example&lt;/a&gt; using The Space Devs API that you can use with MCP.&lt;/p&gt;

&lt;p&gt;There are many prebuilt Connectors ready to use or extend including &lt;a href="https://github.com/apollographql/connectors-community/tree/main/connectors/stripe" rel="noopener noreferrer"&gt;Stripe&lt;/a&gt;, &lt;a href="https://github.com/apollographql/connectors-community/tree/main/connectors/openai" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt;, &lt;a href="https://github.com/apollographql/connectors-community/tree/main/connectors/aws" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt;, and more.  Motivated to build your own Connectors to share with the community? We welcome contributions! Learn how to submit your Connector in the &lt;a href="https://github.com/apollographql/connectors-community" rel="noopener noreferrer"&gt;Connectors Community repo&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>restapi</category>
      <category>graphql</category>
    </item>
    <item>
      <title>Simplify Your REST API Logic in React with Connectors for REST APIs and GraphQL</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Thu, 29 May 2025 20:10:27 +0000</pubDate>
      <link>https://forem.com/apollographql/simplify-your-rest-api-logic-in-react-with-connectors-for-rest-apis-and-graphql-5ab7</link>
      <guid>https://forem.com/apollographql/simplify-your-rest-api-logic-in-react-with-connectors-for-rest-apis-and-graphql-5ab7</guid>
      <description>&lt;p&gt;If you’ve built a React or Next.js app that talks to multiple REST APIs, you’ve probably got a file like actions.ts set up as a central spot for fetch calls to public services like the USGS Earthquake API or Nominatim’s reverse geocoder. It works, but the code often ends up repetitive, brittle, and difficult to maintain or scale when different endpoints need to talk to each other.&lt;/p&gt;

&lt;p&gt;In this post, we’re going to take that same setup and clean it up with a GraphQL layer powered by &lt;a href="https://www.apollographql.com/docs/graphos/connectors" rel="noopener noreferrer"&gt;Apollo Connectors&lt;/a&gt;. Instead of orchestrating data in actions.ts, we’ll define a GraphQL schema that does the work for us. Using a declarative configuration, we’ll unify earthquake and location data in a single query with no need for custom resolvers or custom backend logic. The goal: to make your frontend simpler and your data fetching smarter, without giving up the REST services and patterns you already know.&lt;/p&gt;
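
&lt;p&gt;To give a sense of the end state, the frontend will be able to ask for earthquakes and their human-readable locations in one round trip, with a query shaped roughly like this (the field names here are illustrative; the real ones come from the schema we build below):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query RecentEarthquakes {
  # One query replaces two chained fetch calls in actions.ts:
  # the router calls USGS, then Nominatim for each set of coordinates.
  earthquakes(limit: 5) {
    magnitude
    time
    location {
      displayName
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;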

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create an &lt;a href="https://studio.apollographql.com/" rel="noopener noreferrer"&gt;Apollo Studio&lt;/a&gt; account. This allows you to create and manage your graph, providing the necessary credentials APOLLO_KEY and APOLLO_GRAPH_REF (more on those later…), which the Apollo Router uses to fetch the supergraph schema and run locally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.apollographql.com/docs/rover/getting-started#installation-methods" rel="noopener noreferrer"&gt;Install&lt;/a&gt; and &lt;a href="https://www.apollographql.com/docs/rover/configuring" rel="noopener noreferrer"&gt;authenticate&lt;/a&gt; Rover CLI, which we will use for configuring our graph and running Apollo Router locally.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;In the directory where you want to host your schema, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rover init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Follow the prompts in the CLI to create a new project.&lt;/p&gt;

&lt;p&gt;Rover will generate a command for you that contains your APOLLO_KEY and APOLLO_GRAPH_REF. You can start the Router with the command provided, but I would recommend putting these values in your VSCode settings.json.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"terminal.integrated.profiles.osx"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"graphos"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"zsh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-l"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"APOLLO_KEY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;YOUR KEY&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"APOLLO_GRAPH_REF"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;YOUR GRAPH REF&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"terminal.integrated.defaultProfile.osx"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"graphos"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a file at the root called router.yaml and place this code in it. It relaxes CORS restrictions while working in dev. &lt;a href="https://www.apollographql.com/docs/graphos/routing/configuration/yaml#cors" rel="noopener noreferrer"&gt;This is not for use in production.&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;allow_any_origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you haven’t already, open your project in VS Code or your IDE of choice, and from the terminal run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rover dev --router-config router.yaml --supergraph-config supergraph.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building the Schema
&lt;/h2&gt;

&lt;p&gt;Create a new file called earthquake.graphql and paste the following code.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: If you prefer, &lt;a href="https://github.com/apollographql/connectors-community/blob/main/connectors/usgs/earthquakes-nominatum/earthquake-simple.graphql" rel="noopener noreferrer"&gt;the completed schema can be found here&lt;/a&gt;.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://specs.apollo.dev/federation/v2.10&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="c1"&gt;# Enable this schema to use Apollo Federation features
&lt;/span&gt;&lt;span class="nd"&gt;@link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="c1"&gt;# Enable this schema to use Apollo Connectors
&lt;/span&gt;   &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://specs.apollo.dev/connect/v0.1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@connect&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@source&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nd"&gt;@source&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;usgs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
   &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;baseURL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://earthquake.usgs.gov/fdsnws/event/1/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="n"&gt;EarthquakeProperties&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;mag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Float&lt;/span&gt;
 &lt;span class="n"&gt;place&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;
 &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Float&lt;/span&gt;
 &lt;span class="n"&gt;updated&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Float&lt;/span&gt;
 &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;
 &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="n"&gt;EarthquakeGeometry&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="c1"&gt;#this returns 3 values - lon,lat,depth in km
&lt;/span&gt; &lt;span class="n"&gt;coordinates&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Float&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Earthquake&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt;
 &lt;span class="n"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;EarthquakeProperties&lt;/span&gt;
 &lt;span class="n"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;EarthquakeGeometry&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Query&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nf"&gt;recentEarthquakes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Int&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="n"&gt;lat&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="n"&gt;lon&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="n"&gt;maxRadius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Int&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Earthquake&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
   &lt;span class="nd"&gt;@connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;usgs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
     &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="n"&gt;GET&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;query?format=geojson&amp;amp;latitude={$args.lat}&amp;amp;longitude={$args.lon}&amp;amp;maxradiuskm={$args.maxRadius}&amp;amp;limit={$args.limit}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
     &lt;span class="p"&gt;}&lt;/span&gt;
     &lt;span class="n"&gt;selection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
       $.features
      {
       properties{
         mag
         place
         time
         updated
         url
         title
       }
       geometry{
        coordinates
       }
       id
     }

     &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
   &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The two @link directives at the top of the file are required to enable federation and Connectors. The @source directive points to our base URL for USGS earthquakes.&lt;/p&gt;

&lt;p&gt;Beneath these directives are three types: Earthquake, EarthquakeProperties, and EarthquakeGeometry. These define what we want to make available in our schema. You can shape this however you want and include as much of the REST API as is valuable for your application. This is also where you define each field’s type and which values are nullable.&lt;/p&gt;

&lt;p&gt;Finally, you will see the Query type. In this schema we only have one query, but you can build out as many as you need. For example, while this one returns recent earthquakes and takes four parameters, you may also want a query that retrieves a single earthquake by ID. What you design in this schema depends on the needs of your application and frontend team. Inside this query you will see the @connect directive, which declares what populates the query and how the result is shaped in the selection set below.&lt;/p&gt;
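&lt;p&gt;&lt;em&gt;As a minimal sketch, such a field could look like this (the earthquakeById name is hypothetical, and it assumes the USGS eventid query parameter and the Connectors -&amp;gt;first mapping method):&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;earthquakeById(id: ID!): Earthquake
  @connect(
    source: "usgs"
    http: { GET: "query?format=geojson&amp;amp;eventid={$args.id}" }
    selection: """
    $.features-&amp;gt;first {
      id
      properties { mag place time updated url title }
      geometry { coordinates }
    }
    """
  )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;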

&lt;h2&gt;
  
  
  Adding a second REST API
&lt;/h2&gt;

&lt;p&gt;Next, we want to add extended location details for each earthquake using Nominatim. To do this, add another @source directive below the USGS one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@source&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;location&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
   &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="n"&gt;baseURL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://nominatim.openstreetmap.org/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
     &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User-Agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;testing&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nominatim requires a User-Agent header, but the value passed can be anything you want.&lt;/p&gt;

&lt;p&gt;Next, in your Earthquake type, add a new field “display_name” with the following @connect directive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Earthquake&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt;
 &lt;span class="n"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;EarthquakeProperties&lt;/span&gt;
 &lt;span class="n"&gt;geometry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;EarthquakeGeometry&lt;/span&gt;
 &lt;span class="n"&gt;display_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;
 &lt;span class="nd"&gt;@connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;location&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
   &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="n"&gt;GET&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reverse?lat={$this.geometry.coordinates-&amp;gt;slice(1,2)-&amp;gt;first}&amp;amp;lon={$this.geometry.coordinates-&amp;gt;first}&amp;amp;format=json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="n"&gt;selection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
   $.display_name
   &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we use a Connector to declare that display_name comes from Nominatim. Later in the React app, we’ll cover how you can alias this to match what you are using on your frontend. You can also alias here in the schema if you choose. The $this key references the parent Earthquake object.&lt;/p&gt;
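&lt;p&gt;&lt;em&gt;A minimal sketch of the schema-side alias, assuming you want the field exposed as locationDetails: rename the field and keep the selection pointed at display_name.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locationDetails: String
  @connect(
    source: "location"
    http: { GET: "reverse?lat={$this.geometry.coordinates-&amp;gt;slice(1,2)-&amp;gt;first}&amp;amp;lon={$this.geometry.coordinates-&amp;gt;first}&amp;amp;format=json" }
    selection: "$.display_name"
  )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;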

&lt;p&gt;Your schema is ready. The last thing to do before testing it is to update your supergraph.yaml to point to your file. It should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;subgraphs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;earthquake&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;routing_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://localhost:4000&lt;/span&gt;
   &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;earthquake.graphql&lt;/span&gt;
&lt;span class="na"&gt;federation_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;=2.10.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To learn about configuring your Router, you can &lt;a href="https://www.apollographql.com/docs/rover/commands/supergraphs#yaml-configuration-file" rel="noopener noreferrer"&gt;head over to the documentation&lt;/a&gt; to see more options, especially those relevant once you are ready to go to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the Query
&lt;/h2&gt;

&lt;p&gt;Head to &lt;a href="http://localhost:4000" rel="noopener noreferrer"&gt;http://localhost:4000&lt;/a&gt; to create and test the query for your app.&lt;/p&gt;
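&lt;p&gt;&lt;em&gt;For example, an operation like this should return earthquakes along with their Nominatim display_name (the variable values below are arbitrary sample coordinates):&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query RecentEarthquakes($limit: Int!, $lat: String!, $lon: String!, $maxRadius: Int!) {
  recentEarthquakes(limit: $limit, lat: $lat, lon: $lon, maxRadius: $maxRadius) {
    id
    display_name
    properties { mag place title }
    geometry { coordinates }
  }
}

# Example variables:
# { "limit": 5, "lat": "34.05", "lon": "-118.24", "maxRadius": 100 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;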

&lt;p&gt;Once you have confirmed that your graph is working as expected, it’s time to move to the React app to see what we need to modify. Leave your local Router running as you will need it to be able to test your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modifying the React App
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/AmandaApollo/earthquake-finder" rel="noopener noreferrer"&gt;Clone the repo&lt;/a&gt;, install the dependencies, and run the project.&lt;/p&gt;

&lt;p&gt;Open actions.ts so we can investigate the API calls.&lt;/p&gt;

&lt;p&gt;There are two functions here: one calls USGS, and a second then calls Nominatim on each returned value. There is also some code to create the object our frontend expects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;searchEarthquakes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="nx"&gt;latitude&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;longitude&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;maxRadius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;limit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Earthquake&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="c1"&gt;// Build USGS Earthquake API URL&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`https://earthquake.usgs.gov/fdsnws/event/1/query?format=geojson&amp;amp;latitude=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;latitude&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;amp;longitude=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;longitude&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;amp;maxradius=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;maxRadius&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;amp;limit=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;


   &lt;span class="c1"&gt;// Fetch earthquake data&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;


&lt;span class="c1"&gt;// rest of code omitted here. . . . .&lt;/span&gt;


   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;earthquakesWithLocation&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Error fetching earthquake data:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Integrating your GraphQL query doesn’t have to be complicated. If you’re comfortable writing a query and calling fetch, you already know 90% of what you need. Head back to the GraphQL sandbox and click the three dots to the right of your query, then select the copy to cURL option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figehva5qrqot9wl517ig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figehva5qrqot9wl517ig.png" alt="Apollo Sandbox with copy to curl button highlighted" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: If you would rather follow along with the final version, &lt;a href="https://github.com/AmandaApollo/earthquake-finder/tree/graphql-version" rel="noopener noreferrer"&gt;take a look at this branch.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In actions.ts, inside the try/catch block at the top, paste your cURL command. It is a little difficult to read as-is, so if you are in VS Code, this is a good place to use Copilot to convert it into something more readable. The end result should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`
    query RecentEarthquakes($limit: Int!, $lat: String!, $lon: String!, $maxRadius: Int!) {
      recentEarthquakes(limit: $limit, lat: $lat, lon: $lon, maxRadius: $maxRadius) {
        id
        locationDetails: display_name
        properties {
          mag
          place
          time
          updated
          url
          title
        }
        geometry {
          coordinates
        }
      }
    }
  `&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;


  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;variables&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;maxRadius&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;lat&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;latitude&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;lon&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;longitude&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;


  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:4000/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`GraphQL API error: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;statusText&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;


  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;


  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;earthquakes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;recentEarthquakes&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
   &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;earthquakes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;earthquakes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here you have your query, the parameters needed from the frontend, and a fetch call to your graph. Notice we alias display_name to locationDetails to match our frontend’s shape.&lt;/p&gt;

&lt;p&gt;Let’s clean up the old code.&lt;/p&gt;

&lt;p&gt;In the try/catch block, you can remove the rest of the old code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Build USGS Earthquake API URL&lt;/span&gt;
   &lt;span class="c1"&gt;// const url = `https://earthquake.usgs.gov/fdsnws/event/1/query?format=geojson&amp;amp;latitude=${latitude}&amp;amp;longitude=${longitude}&amp;amp;maxradius=${maxRadius}&amp;amp;limit=${limit}`;&lt;/span&gt;


   &lt;span class="c1"&gt;// // Fetch earthquake data&lt;/span&gt;
   &lt;span class="c1"&gt;// const response = await fetch(url);&lt;/span&gt;


   &lt;span class="c1"&gt;// if (!response.ok) {&lt;/span&gt;
   &lt;span class="c1"&gt;//   throw new Error(`USGS API error: ${response.statusText}`);&lt;/span&gt;
   &lt;span class="c1"&gt;// }&lt;/span&gt;


   &lt;span class="c1"&gt;// const data = await response.json();&lt;/span&gt;
   &lt;span class="c1"&gt;// const earthquakes = data.features as Earthquake[];&lt;/span&gt;


   &lt;span class="c1"&gt;// Get location details for each earthquake&lt;/span&gt;
   &lt;span class="c1"&gt;// const earthquakesWithLocation = await Promise.all(&lt;/span&gt;
   &lt;span class="c1"&gt;//   earthquakes.map(async (quake) =&amp;gt; {&lt;/span&gt;
   &lt;span class="c1"&gt;//     try {&lt;/span&gt;
   &lt;span class="c1"&gt;//       // Get location details from Nominatim&lt;/span&gt;
   &lt;span class="c1"&gt;//       const locationDetails = await getLocationDetails(&lt;/span&gt;
   &lt;span class="c1"&gt;//         quake.geometry.coordinates[1], // latitude&lt;/span&gt;
   &lt;span class="c1"&gt;//         quake.geometry.coordinates[0] // longitude&lt;/span&gt;
   &lt;span class="c1"&gt;//       );&lt;/span&gt;


   &lt;span class="c1"&gt;//       // Add location details to earthquake properties&lt;/span&gt;
   &lt;span class="c1"&gt;//       return {&lt;/span&gt;
   &lt;span class="c1"&gt;//         ...quake,&lt;/span&gt;
   &lt;span class="c1"&gt;//         locationDetails: locationDetails ?? undefined,&lt;/span&gt;
   &lt;span class="c1"&gt;//       };&lt;/span&gt;
   &lt;span class="c1"&gt;//     } catch (error) {&lt;/span&gt;
   &lt;span class="c1"&gt;//       console.error(&lt;/span&gt;
   &lt;span class="c1"&gt;//         `Error getting location details for earthquake ${quake.id}:`,&lt;/span&gt;
   &lt;span class="c1"&gt;//         error&lt;/span&gt;
   &lt;span class="c1"&gt;//       );&lt;/span&gt;
   &lt;span class="c1"&gt;//       return quake;&lt;/span&gt;
   &lt;span class="c1"&gt;//     }&lt;/span&gt;
   &lt;span class="c1"&gt;//   })&lt;/span&gt;
   &lt;span class="c1"&gt;// );&lt;/span&gt;


   &lt;span class="c1"&gt;// Log the first earthquake with location details for debugging&lt;/span&gt;
 &lt;span class="c1"&gt;//   if (earthquakesWithLocation.length &amp;gt; 0) {&lt;/span&gt;
 &lt;span class="c1"&gt;//     console.log(&lt;/span&gt;
 &lt;span class="c1"&gt;//       "First earthquake with location details:",&lt;/span&gt;
 &lt;span class="c1"&gt;//       JSON.stringify(&lt;/span&gt;
 &lt;span class="c1"&gt;//         {&lt;/span&gt;
 &lt;span class="c1"&gt;//           id: earthquakesWithLocation[0].id,&lt;/span&gt;
 &lt;span class="c1"&gt;//           hasLocationDetails:&lt;/span&gt;
 &lt;span class="c1"&gt;//             !!earthquakesWithLocation[0].locationDetails,&lt;/span&gt;
 &lt;span class="c1"&gt;//           locationDetails:&lt;/span&gt;
 &lt;span class="c1"&gt;//             earthquakesWithLocation[0].locationDetails,&lt;/span&gt;
 &lt;span class="c1"&gt;//         },&lt;/span&gt;
 &lt;span class="c1"&gt;//         null,&lt;/span&gt;
 &lt;span class="c1"&gt;//         2&lt;/span&gt;
 &lt;span class="c1"&gt;//       )&lt;/span&gt;
 &lt;span class="c1"&gt;//     );&lt;/span&gt;
 &lt;span class="c1"&gt;//   }&lt;/span&gt;


 &lt;span class="c1"&gt;//   return earthquakesWithLocation;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also remove the entire getLocationDetails function. We now fetch exactly what we need in one GraphQL query, so this second function and its API call are no longer needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getLocationDetails&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="c1"&gt;//   latitude: number,&lt;/span&gt;
&lt;span class="c1"&gt;//   longitude: number&lt;/span&gt;
&lt;span class="c1"&gt;// ): Promise&amp;lt;string | null&amp;gt; {&lt;/span&gt;
&lt;span class="c1"&gt;//   try {&lt;/span&gt;
&lt;span class="c1"&gt;//     // Add a small delay to avoid rate limiting&lt;/span&gt;
&lt;span class="c1"&gt;//     await new Promise((resolve) =&amp;gt; setTimeout(resolve, 100));&lt;/span&gt;


&lt;span class="c1"&gt;//     const url = `https://nominatim.openstreetmap.org/reverse?format=json&amp;amp;lat=${latitude}&amp;amp;lon=${longitude}`;&lt;/span&gt;


&lt;span class="c1"&gt;//     const response = await fetch(url, {&lt;/span&gt;
&lt;span class="c1"&gt;//       headers: {&lt;/span&gt;
&lt;span class="c1"&gt;//         "User-Agent": "EarthquakeSearchApp/1.0",&lt;/span&gt;
&lt;span class="c1"&gt;//       },&lt;/span&gt;
&lt;span class="c1"&gt;//       // Ensure we don't cache the response&lt;/span&gt;
&lt;span class="c1"&gt;//       cache: "no-store",&lt;/span&gt;
&lt;span class="c1"&gt;//     });&lt;/span&gt;


&lt;span class="c1"&gt;//     if (!response.ok) {&lt;/span&gt;
&lt;span class="c1"&gt;//       console.error(&lt;/span&gt;
&lt;span class="c1"&gt;//         `Nominatim API error: ${response.status} ${response.statusText}`&lt;/span&gt;
&lt;span class="c1"&gt;//       );&lt;/span&gt;
&lt;span class="c1"&gt;//       return null;&lt;/span&gt;
&lt;span class="c1"&gt;//     }&lt;/span&gt;


&lt;span class="c1"&gt;//     const data = await response.json();&lt;/span&gt;


&lt;span class="c1"&gt;//     // Log the response for debugging&lt;/span&gt;
&lt;span class="c1"&gt;//     console.log(&lt;/span&gt;
&lt;span class="c1"&gt;//       `Location details for ${latitude},${longitude}:`,&lt;/span&gt;
&lt;span class="c1"&gt;//       JSON.stringify({ display_name: data.display_name }, null, 2)&lt;/span&gt;
&lt;span class="c1"&gt;//     );&lt;/span&gt;


&lt;span class="c1"&gt;//     return data.display_name ?? null;&lt;/span&gt;
&lt;span class="c1"&gt;//   } catch (error) {&lt;/span&gt;
&lt;span class="c1"&gt;//     console.error("Error getting location details:", error);&lt;/span&gt;
&lt;span class="c1"&gt;//     return null;&lt;/span&gt;
&lt;span class="c1"&gt;//   }&lt;/span&gt;
&lt;span class="c1"&gt;//}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By modifying this file to use our new GraphQL query, we eliminated over 50 lines of redundant code without changing any UI logic. It’s a reminder that simplicity scales, and a practical example of how to keep your GraphQL integration dead simple.&lt;/p&gt;

&lt;p&gt;Another interesting thing to notice here is that &lt;code&gt;searchByPlace&lt;/code&gt; is still intact and using REST. GraphQL is not all or nothing: you can adopt it where it makes sense and use it alongside your existing REST API calls. This lets you adopt GraphQL gradually without breaking other parts of your application, which is especially important if you work with multiple teams.&lt;/p&gt;
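The coexistence described above can be sketched roughly as follows. This is an illustrative sketch, not the article's actual code: the endpoint URLs, query shape, and field names are hypothetical, though `searchByPlace` and `locationDetails` echo names used earlier in the post.

```javascript
// Incremental adoption: one search path migrated to GraphQL while another
// still calls REST directly. URLs, query shape, and fields are hypothetical.
const GRAPHQL_URL = "http://localhost:4000/";
const REST_URL = "https://earthquake.usgs.gov/fdsnws/event/1/query";

// Migrated path: a single GraphQL query returns exactly what the UI needs.
async function searchByDate(startTime, endTime) {
  const query = `
    query Earthquakes($startTime: String!, $endTime: String!) {
      earthquakes(startTime: $startTime, endTime: $endTime) {
        magnitude
        locationDetails
      }
    }`;
  const response = await fetch(GRAPHQL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { startTime, endTime } }),
  });
  const { data } = await response.json();
  return data.earthquakes;
}

// Untouched path: still a plain REST call, and both can coexist indefinitely.
async function searchByPlace(place) {
  const response = await fetch(
    `${REST_URL}?format=geojson&text=${encodeURIComponent(place)}`
  );
  return response.json();
}
```

Because both functions keep the same signatures the UI expects, callers don't need to know which transport is behind each one.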

&lt;p&gt;Save your files and navigate to &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; to see your updates in action.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihvmex5gbtt32j791rcp.gif" class="article-body-image-wrapper"&gt;&lt;img width="560" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihvmex5gbtt32j791rcp.gif" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;While this example uses Apollo Router and Connectors for REST APIs to keep things simple and local, in a production app you’d typically pair this setup with Apollo Client on the frontend. Apollo Client handles caching, state management, and reactive updates, all the things you’d expect in a mature GraphQL application.&lt;/p&gt;

&lt;p&gt;But the key point here is that GraphQL doesn’t have to be all-or-nothing or hard to adopt. By adding a lightweight GraphQL layer over your existing REST services using Connectors, you can reduce boilerplate, simplify frontend data fetching, and set yourself up for more scalable, maintainable code. And when you’re ready, tools like Apollo Client make it easy to integrate this fully into your application architecture.&lt;/p&gt;

&lt;p&gt;To start building your first Apollo Connector for REST APIs today, check out &lt;a href="https://www.apollographql.com/docs/graphos/connectors" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt; or try the &lt;a href="https://github.com/apollographql/connectors-community" rel="noopener noreferrer"&gt;pre-built connectors&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>react</category>
      <category>nextjs</category>
      <category>graphql</category>
      <category>api</category>
    </item>
    <item>
      <title>Authenticated tests with Playwright, Prisma, Postgres, and NextAuth</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Tue, 06 Aug 2024 13:47:51 +0000</pubDate>
      <link>https://forem.com/amandamartindev/authenticated-tests-with-playwright-prisma-postgres-and-nextauth-12pc</link>
      <guid>https://forem.com/amandamartindev/authenticated-tests-with-playwright-prisma-postgres-and-nextauth-12pc</guid>
      <description>&lt;p&gt;Testing authenticated pages is hard. Lets demystify it!&lt;/p&gt;

&lt;p&gt;In EddieHub community projects, it's common to place authenticated pages behind a GitHub SSO login, a natural fit for an open source project. But when it came time to test these pages, it proved challenging to write a test that could pass the login flow while staying as close to the real user experience as possible.&lt;/p&gt;

&lt;p&gt;This flow was inspired by &lt;a href="https://dev.to/kuroski/writing-integration-tests-for-nextjs-next-auth-prisma-using-playwright-and-msw-388m"&gt;this post&lt;/a&gt;, which takes a similar approach but adds a mock server that we did not need for our tests.&lt;/p&gt;

&lt;p&gt;This implementation was finished on a livestream. If you want to check it out, you can watch the &lt;a href="https://www.youtube.com/live/dGa4sJXtbrU?feature=shared" rel="noopener noreferrer"&gt;replay on YouTube&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And finally, before we dive in: this work and blog post are a community effort, with huge thanks to &lt;a href="https://github.com/eddiejaoude" rel="noopener noreferrer"&gt;Eddie Jaoude&lt;/a&gt; and &lt;a href="https://github.com/dan-mba" rel="noopener noreferrer"&gt;Dan B&lt;/a&gt; for getting this code finalized and merged. If you are interested in geeking out with us, check out the &lt;a href="https://github.com/EddieHubCommunity" rel="noopener noreferrer"&gt;EddieHub GitHub org&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Project
&lt;/h2&gt;

&lt;p&gt;This code is implemented in a project called &lt;a href="https://github.com/EddieHubCommunity/HealthCheck" rel="noopener noreferrer"&gt;The Open Source HealthCheck&lt;/a&gt;. This project helps repo owners quickly check whether their projects follow best practices and identifies places for improvement.&lt;/p&gt;

&lt;p&gt;The following tutorial will show you how a mock GitHub OAuth login is implemented in this project.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;NextJS / NextAuth&lt;/li&gt;
&lt;li&gt;Postgres&lt;/li&gt;
&lt;li&gt;Prisma&lt;/li&gt;
&lt;li&gt;Playwright&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that this tutorial assumes you have this tech stack set up and Playwright configured with your database to run in GitHub Actions.&lt;/p&gt;

&lt;p&gt;Interested in a version of this using MongoDB and Mongoose? Check out the &lt;a href="https://github.com/EddieHubCommunity/BioDrop/blob/main/tests/setup/auth.js" rel="noopener noreferrer"&gt;code here&lt;/a&gt; in BioDrop. The code is almost the same, except for the creation of the database user and some schema modifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to implement: Set up auth
&lt;/h2&gt;

&lt;p&gt;This code will mock the initial login request to GitHub by NextAuth. To read more about the NextAuth GitHub provider, check out &lt;a href="https://next-auth.js.org/providers/github" rel="noopener noreferrer"&gt;the docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Prefer to see the full implementation instead of a tutorial? Then &lt;a href="https://github.com/EddieHubCommunity/HealthCheck/blob/main/tests/setup/auth.js" rel="noopener noreferrer"&gt;head to the repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's get started...&lt;/p&gt;

&lt;p&gt;Create a file called &lt;code&gt;auth.js&lt;/code&gt; at &lt;code&gt;tests/setup/auth.js&lt;/code&gt;. This file makes sure that all the values needed for the auth process are in place for the test login/logout.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up a test user in your db. This user will not be stored in your production db; it is only used in the test environment. What is required to create a user will depend on your schema.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;login&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Authenticated User&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;authenticated-user@test.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;testUser&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://github.com/mona.png&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;emailVerified&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;testUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upsert&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;update&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;testUser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Failed to create or retrieve test authenticated user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Test authenticated user creation failed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Next, we need to create a valid JWT for GitHub OAuth. The values used here are those GitHub expects, including the profile image, the access token (fake, but in a similar structure), the user email and name (via the spread &lt;code&gt;...user&lt;/code&gt;), and &lt;code&gt;sub&lt;/code&gt;, which is the mock GitHub id. For more information on GitHub OAuth, check out &lt;a href="https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps" rel="noopener noreferrer"&gt;the docs&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sessionToken&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://github.com/mona.png&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;accessToken&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ggg_zZl1pWIvKkf3UDynZ09zLvuyZsm1yC0YoRPt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;testUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NEXTAUTH_SECRET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Now that you have a valid JWT, you can create an authenticated user session with Prisma. One thing to note here: make sure the expiry date is in the future. With a future expiry, GitHub won't be called to obtain new tokens while your tests are running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;sessionToken&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;testUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;expires&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getFullYear&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nx"&gt;date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getMonth&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upsert&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;sessionToken&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sessionToken&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;update&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Test authenticated session creation failed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
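As an aside, the `expires` value in the code above relies on a small JavaScript date trick: `new Date(year, month + 1, 0)` asks for "day zero" of the following month, which the Date constructor normalizes to the last day of the current month. A quick standalone check:

```javascript
// "Day 0" of next month normalizes to the last day of the current month.
const date = new Date(2024, 1, 15); // 15 Feb 2024 (months are 0-indexed)
const expires = new Date(date.getFullYear(), date.getMonth() + 1, 0);
console.log(expires.getMonth(), expires.getDate()); // 1 29 — 29 Feb 2024 (leap year)
```

So during a test run started mid-month, the session expiry always lands safely in the future.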



&lt;p&gt;4. Next, create the mock account and insert it into the database. Note that the access token here should match the one you used when creating the JWT. These values will depend on your database schema.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;account&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;oauth&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;github&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;providerAccountId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;testUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;testUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;access_token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ggg_zZl1pWIvKkf3UDynZ09zLvuyZsm1yC0YoRPt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;token_type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bearer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;read:org,read:user,repo,user:email,test:all&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;account&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upsert&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;provider_providerAccountId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;github&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;providerAccountId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;testUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;update&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;account&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;account&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Test account creation failed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5. Now that everything is set up, you have what you need to create a new browser context, add your session token cookie, and browse to a page as an authenticated user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newContext&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addCookies&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;next-auth.session-token&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sessionToken&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;127.0.0.1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;httpOnly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;sameSite&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Lax&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;secure&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;expires&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;]);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6. Finally, in a separate function, create a logout flow that clears the cookies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;logout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newContext&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clearCookies&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to implement: Write the tests
&lt;/h2&gt;

&lt;p&gt;1. In a separate file, &lt;code&gt;add.spec.ts&lt;/code&gt; in &lt;code&gt;tests/account/repo&lt;/code&gt;, write your tests. Where you do this depends on your approach; in the HealthCheck implementation, all tests from login through authenticated page tests live in the same file. At the top of the file, import your login/logout helpers from the setup.&lt;/p&gt;
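Assuming `login` and `logout` are exported from `tests/setup/auth.js` as in the HealthCheck repo, the top of `add.spec.ts` might look something like this (the relative path and export style may differ in your project):

```javascript
import { test, expect } from "@playwright/test";
// Hypothetical relative path from tests/account/repo/add.spec.ts
import { login, logout } from "../../setup/auth";
```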

&lt;p&gt;2. Write a test for "guest cannot log in" to ensure an unauthenticated user cannot reach pages they shouldn't. Call &lt;code&gt;logout&lt;/code&gt; first to ensure there is no saved authenticated session.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Guest user cannot access add repo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;logout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/account/repo/add&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveURL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Next, write tests for your logged-in users. In the HealthCheck repo, we wanted to ensure that they could see the authenticated nav and get to an authenticated page. Your tests will depend upon the needs of your app.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Logged in user can access add repo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;login&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/account/repo/add&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveURL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/account&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;repo&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;add/&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Logged in user can see add user nav button&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;login&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getByRole&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;link&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Add&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveURL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/account&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;repo&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;add/&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
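
&lt;p&gt;A quick note on the &lt;code&gt;toHaveURL&lt;/code&gt; regexes above: &lt;code&gt;/account\/repo\/add/&lt;/code&gt; asserts the exact path we expect, while the guest test's &lt;code&gt;/\//&lt;/code&gt; is deliberately loose (it matches any URL containing a slash, i.e. "we were redirected somewhere else"); you may want to tighten it for your app. You can sanity-check patterns like these in plain Node before wiring them into assertions:&lt;/p&gt;

```javascript
// Sanity-check the URL regexes used in the toHaveURL assertions above.
const redirectedSomewhere = /\//;            // loose: any URL with a slash
const addRepoPage = /account\/repo\/add/;    // strict: the add-repo path

console.log(addRepoPage.test("http://localhost:3000/account/repo/add")); // true
console.log(addRepoPage.test("http://localhost:3000/"));                 // false
console.log(redirectedSomewhere.test("http://localhost:3000/"));         // true
```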



&lt;p&gt;Thanks for following along. You can find the &lt;a href="https://github.com/EddieHubCommunity/HealthCheck/blob/main/tests/setup/auth.js" rel="noopener noreferrer"&gt;full solution here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If this interests you, stay tuned for part 2 of this blog, where we set up a mock server to handle GitHub API calls after login.&lt;/p&gt;

</description>
      <category>playwright</category>
      <category>nextjs</category>
      <category>prisma</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Value of Community Conferences</title>
      <dc:creator>Amanda</dc:creator>
      <pubDate>Wed, 17 Jul 2024 01:12:02 +0000</pubDate>
      <link>https://forem.com/amandamartindev/the-value-of-community-conferences-5fh3</link>
      <guid>https://forem.com/amandamartindev/the-value-of-community-conferences-5fh3</guid>
      <description>&lt;p&gt;Ever since returning from the last &lt;a href="https://jsconfbp.com/" rel="noopener noreferrer"&gt;JSConf Budapest&lt;/a&gt;, a question has been circling in my mind: how can we effectively sell the idea of community conferences to companies? It's a topic that seems to resonate with many in the tech field, from developers to developer advocates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Community Conferences Matter
&lt;/h2&gt;

&lt;p&gt;Those of us in tech who have been to many events, big and small, know there's something unmatched about a well-curated community conference. They're not just events; they're breeding grounds for innovation, learning, and networking. You leave with deep knowledge, friendships, and opportunities you really can't find at larger events.&lt;/p&gt;

&lt;p&gt;Anyone you ask in tech will agree that community-led events matter and need support, but the reality is they are struggling. Beloved conferences like JSConf Budapest have already seen their last event, while others fight to keep their heads above water.&lt;/p&gt;

&lt;p&gt;So if everyone agrees that these types of events are important and necessary for developers, then what gives?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sponsors aren't sponsoring&lt;/strong&gt;. Without larger corporate sponsors, it's impossible for smaller events to keep costs reasonable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why your company should sponsor community conferences
&lt;/h2&gt;

&lt;p&gt;1. &lt;strong&gt;Cost:&lt;/strong&gt; The sponsorship itself will cost much less than one for an event boasting 5,000 to 10,000 attendees and a yacht party. Beyond the base cost, you'll also spend less on staffing, assets, and swag, since you won't be bending your budget to build a cooler booth than the company next to you.&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;Flexibility&lt;/strong&gt;: You will be working directly with the community to plan your presence. While many community conferences aren't keen on vendor talks taking the main stage, that doesn't mean you can't show off your product in other ways. If you don't know how you fit into the event, it's easy to get in touch with the organizers, and a good developer relations team will be able to guide you on the best way to engage meaningfully in this space.&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;Stand out from the crowd&lt;/strong&gt;: Big conferences mean big burnout for attendees, and most of your interactions will be the ten seconds folks give you to make your pitch while they grab your swag and move on. And while many bigger conferences have space for vendor talks, this is generally because they are multi-track, so you will be competing with big names in the tech speaker space.&lt;/p&gt;

&lt;p&gt;A personal example: at my first big conference as a speaker, I was given a difficult slot in a difficult location and had four people in the audience. This conference had thousands of attendees.&lt;/p&gt;

&lt;p&gt;4. &lt;strong&gt;More impactful conversations:&lt;/strong&gt; Quantity != quality. While the overall number of attendees may be lower, at a community conference you have the opportunity to really get to know attendees personally and professionally. Instead of pitching your product with the usual script or demo, you are able to have deep technical conversations with devs about what they are thinking about, building, and getting excited about. People are there to learn and be inspired, and the conversations during community event downtime are always top tier.&lt;/p&gt;

&lt;p&gt;5. &lt;strong&gt;Brand bonus points&lt;/strong&gt;: Your product isn't that special (oh, you have AI now too?). I mean, maybe it is, but tech is flooded with cool tools you are absolutely in competition with. Stand out by supporting communities developers love in a meaningful way. Devs talk to each other, and while we want tools that solve our problems, we also love to support companies doing good in the community. If you can be both an awesome tool and a great community member, your brand wins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't get me wrong
&lt;/h2&gt;

&lt;p&gt;A note about all this: still do the big conferences, but be more selective with that budget. I would argue that by diversifying your approach to include MORE community-driven events, like smaller single-track conferences, meetups, and whatever cool events devs are dreaming up, you will see a better ROI than blowing your whole budget on the flashiest event you can find.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Thank You to the Community
&lt;/h2&gt;

&lt;p&gt;As I reflect on the experiences and insights gained from JSConf Budapest, I can't help but express my gratitude. Thank you to the organizers, the speakers, and every attendee who contributes to making these conferences a rich source of knowledge and community spirit.&lt;/p&gt;

&lt;p&gt;In conclusion, selling the concept of community conferences to companies should be a natural step. After all, the advantages are clear. It's about recognizing the importance of these events and the long-term value they bring to individuals and organizations alike. Thank you, once again, to everyone who plays a part in these incredible tech gatherings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drop your favorite community-led event in the comments&lt;/strong&gt;, and reach out to me to continue the conversation on &lt;a href="https://x.com/hey_amandam" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/amandamartin-dev/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>techtalks</category>
      <category>community</category>
      <category>marketing</category>
    </item>
  </channel>
</rss>
