<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: artespraticas</title>
    <description>The latest articles on Forem by artespraticas (@artespraticas).</description>
    <link>https://forem.com/artespraticas</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3929835%2Fb9e4db2f-cdb2-4766-8fda-64dd23180d7b.jpeg</url>
      <title>Forem: artespraticas</title>
      <link>https://forem.com/artespraticas</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/artespraticas"/>
    <language>en</language>
    <item>
      <title>I built a pay-per-use web scraping API using x402 — here's how it works</title>
      <dc:creator>artespraticas</dc:creator>
      <pubDate>Thu, 14 May 2026 08:33:26 +0000</pubDate>
      <link>https://forem.com/artespraticas/i-built-a-pay-per-use-web-scraping-api-using-x402-heres-how-it-works-14p8</link>
      <guid>https://forem.com/artespraticas/i-built-a-pay-per-use-web-scraping-api-using-x402-heres-how-it-works-14p8</guid>
      <description>&lt;p&gt;I'm a graphic designer. Not a developer. &lt;/p&gt;

&lt;p&gt;AI has been eating into my design work, so instead &lt;br&gt;
of fighting it, I decided to learn how to build &lt;br&gt;
AI-native tools. This is my story of building my &lt;br&gt;
first pay-per-use API using x402 — the new HTTP &lt;br&gt;
payment standard from Coinbase.&lt;/p&gt;

&lt;h2&gt;What is x402?&lt;/h2&gt;

&lt;p&gt;x402 revives the long-forgotten HTTP 402 "Payment Required" status code. Instead of subscriptions or API keys, clients pay per request using USDC on Base.&lt;/p&gt;

&lt;p&gt;The flow is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Client calls your endpoint&lt;/li&gt;
&lt;li&gt;Server returns 402 with payment instructions&lt;/li&gt;
&lt;li&gt;Client pays $0.01 USDC on Base&lt;/li&gt;
&lt;li&gt;Client retries with payment proof&lt;/li&gt;
&lt;li&gt;Server returns the data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No accounts. No API keys. No subscriptions.&lt;/p&gt;
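The five steps above can be sketched as a server-side handler. This is a minimal illustration of the flow, not the exact x402 wire format: the 402 body fields and the proof check are assumptions for the sketch.

```typescript
// Sketch of the x402 flow from the server's side. The 402 body and the
// verifyPayment callback are illustrative, not the real x402 schema.
function handleScrape(
  paymentProof: string | undefined,
  verifyPayment: (proof: string) => boolean,
): { status: number; body: unknown } {
  if (!paymentProof) {
    // Step 2: no payment attached, so answer 402 with payment instructions
    return {
      status: 402,
      body: { protocol: "x402", priceUSD: "0.01", asset: "USDC", network: "base" },
    };
  }
  if (!verifyPayment(paymentProof)) {
    // An invalid proof is treated the same as no payment
    return { status: 402, body: { error: "invalid payment proof" } };
  }
  // Step 5: payment verified, return the data
  return { status: 200, body: { status: "ok", title: "Example Domain" } };
}
```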

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;Scrape Agent — a pay-per-use web scraping API.&lt;/p&gt;

&lt;p&gt;Send any URL, pay $0.01 USDC, and get back clean text, links, or HTML from that page.&lt;/p&gt;

&lt;p&gt;Endpoint: &lt;a href="https://scrapeagent.xyz/api/scrape/x402" rel="noopener noreferrer"&gt;https://scrapeagent.xyz/api/scrape/x402&lt;/a&gt;&lt;br&gt;
Method: POST&lt;br&gt;
Price: $0.01 USDC on Base&lt;/p&gt;

&lt;h2&gt;How to try it&lt;/h2&gt;

&lt;p&gt;Use agentcash (an x402 client) — no install needed with npx:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;npx agentcash fetch https://scrapeagent.xyz/api/scrape/x402 \
  -m POST \
  -b '{"url":"https://example.com","extract":"text"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You'll need a tiny amount of USDC on Base in your agentcash wallet. The payment happens automatically.&lt;/p&gt;
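What the client automates can be sketched as a small retry loop. Here `doFetch` and `pay` are stand-ins for the HTTP call and the USDC payment step that a client like agentcash handles internally; they are assumptions for the sketch, typed loosely to keep it short.

```typescript
// Client-side sketch: call, pay when told 402, retry with the proof.
// doFetch performs the HTTP request (optionally attaching a payment
// proof); pay settles the quoted amount and returns a proof string.
async function payPerUseFetch(
  doFetch: (proof?: string) => any, // resolves to { status, body }
  pay: (instructions: any) => any,  // resolves to a proof string
) {
  const first = await doFetch();       // step 1: plain request
  if (first.status !== 402) return first.body;
  const proof = await pay(first.body); // step 3: pay $0.01 USDC
  const retry = await doFetch(proof);  // step 4: retry with proof
  if (retry.status !== 200) {
    throw new Error("payment rejected: " + retry.status);
  }
  return retry.body;                   // step 5: the data
}
```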

&lt;h2&gt;The response you get back&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;{
  "protocol": "x402",
  "version": 2,
  "priceUSD": "0.01",
  "url": "https://example.com",
  "status": "ok",
  "title": "Example Domain",
  "content": "This domain is for use in...",
  "wordCount": 42,
  "elapsed": 312
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;The tech stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Vercel Edge Functions (serverless, fast)&lt;/li&gt;
&lt;li&gt;x402 v2 protocol&lt;/li&gt;
&lt;li&gt;USDC on Base mainnet&lt;/li&gt;
&lt;li&gt;Zod for request validation&lt;/li&gt;
&lt;li&gt;Zero dependencies for scraping (pure fetch + regex)&lt;/li&gt;
&lt;/ul&gt;
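The validation step can be hand-rolled too. This is a sketch of what the Zod schema in the real stack would enforce; the field names `url` and `extract` follow the request body shown earlier, and the accepted modes are assumed from the "clean text, links, or HTML" description.

```typescript
// Hand-rolled validation sketch for the scrape request (the real stack
// uses Zod for this). Accepted extract modes are an assumption.
const EXTRACT_MODES = ["text", "links", "html"];

function validateScrapeRequest(body: unknown): { ok: boolean; error?: string } {
  if (typeof body !== "object" || body === null) {
    return { ok: false, error: "body must be a JSON object" };
  }
  const { url, extract } = body as { url?: unknown; extract?: unknown };
  if (typeof url !== "string") {
    return { ok: false, error: "url must be a string" };
  }
  try {
    new URL(url); // throws on malformed URLs
  } catch {
    return { ok: false, error: "url is not a valid URL" };
  }
  if (!EXTRACT_MODES.includes(String(extract))) {
    return { ok: false, error: "extract must be one of: " + EXTRACT_MODES.join(", ") };
  }
  return { ok: true };
}
```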

&lt;h2&gt;What I learned&lt;/h2&gt;

&lt;p&gt;Building this as a non-developer was challenging but completely possible with AI assistance. The x402 protocol is surprisingly elegant; the hardest part was getting the exact response format right.&lt;/p&gt;

&lt;p&gt;The biggest insight: AI agents need data. Web scraping is one of the most fundamental things any agent needs to do. Pay-per-use makes perfect sense for this use case: why pay a monthly subscription when you only scrape occasionally?&lt;/p&gt;

&lt;h2&gt;What's next&lt;/h2&gt;

&lt;p&gt;I'm planning to build more agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Image alt-text generator ($0.02/image)&lt;/li&gt;
&lt;li&gt;Brand color palette extractor ($0.02/scan)&lt;/li&gt;
&lt;li&gt;Design feedback API ($0.25/critique)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're a developer interested in the x402 ecosystem, check out x402.org to learn more.&lt;/p&gt;

&lt;p&gt;And if you want to try my scraping agent:&lt;br&gt;
&lt;a href="https://scrapeagent.xyz" rel="noopener noreferrer"&gt;https://scrapeagent.xyz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love feedback from the developer community, especially on pricing and use cases I might be missing!&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8htpx9nr1dwywjmkzfo5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8htpx9nr1dwywjmkzfo5.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>x402</category>
      <category>agents</category>
      <category>webdev</category>
      <category>api</category>
    </item>
  </channel>
</rss>
