<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hiroki Honda</title>
    <description>The latest articles on Forem by Hiroki Honda (@imhiroki).</description>
    <link>https://forem.com/imhiroki</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3848444%2Fe1d23ed9-3833-4d12-895a-415b84f34713.jpeg</url>
      <title>Forem: Hiroki Honda</title>
      <link>https://forem.com/imhiroki</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/imhiroki"/>
    <language>en</language>
    <item>
      <title>I built a Lighthouse for MCP tools — it scores your tool definitions on every PR</title>
      <dc:creator>Hiroki Honda</dc:creator>
      <pubDate>Mon, 30 Mar 2026 10:55:43 +0000</pubDate>
      <link>https://forem.com/imhiroki/i-built-a-lighthouse-for-mcp-tools-it-scores-your-tool-definitions-on-every-pr-26ec</link>
      <guid>https://forem.com/imhiroki/i-built-a-lighthouse-for-mcp-tools-it-scores-your-tool-definitions-on-every-pr-26ec</guid>
<description>&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;AI agents choose between tools based on one thing: the quality of their descriptions.&lt;/p&gt;

&lt;p&gt;Research shows 97% of MCP tool descriptions have quality defects (arXiv 2602.14878), and optimized tools get selected &lt;strong&gt;3.6x more often&lt;/strong&gt; (arXiv 2602.18914).&lt;/p&gt;

&lt;p&gt;Most MCP developers don't know their tool definitions are broken until an agent silently ignores them.&lt;/p&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ToolRank&lt;/strong&gt; scores MCP tool definitions across 4 dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Findability&lt;/strong&gt; (25pts) — Can agents discover your tool?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clarity&lt;/strong&gt; (35pts) — Can agents understand what it does?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Precision&lt;/strong&gt; (25pts) — Is the input schema complete?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt; (15pts) — Is it token-efficient?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's like Lighthouse, but for MCP tools.&lt;/p&gt;
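&lt;p&gt;For a sense of what these dimensions reward, here is a hypothetical tool definition (the tool and its fields are illustrative, not a real scored example): a verb-led description, explicit usage context, a typed schema with required fields, and nothing wasted on filler.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "name": "search_web",
  "description": "Searches the web and returns the top results as titles, URLs, and snippets. Use this when you need current information that is not in your training data.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {"type": "string", "description": "Search terms, in plain natural language"},
      "limit": {"type": "integer", "description": "Maximum number of results", "default": 5}
    },
    "required": ["query"]
  }
}
&lt;/code&gt;&lt;/pre&gt;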

&lt;h2&gt;GitHub Action — score on every PR&lt;/h2&gt;

&lt;p&gt;Today I published &lt;strong&gt;&lt;a href="https://github.com/marketplace/actions/toolrank-score" rel="noopener noreferrer"&gt;ToolRank Score&lt;/a&gt;&lt;/strong&gt; on GitHub Marketplace.&lt;/p&gt;

&lt;p&gt;Add this to your repo:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ToolRank Score&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*.json'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull-requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;score&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;imhiroki/toolrank-action@v1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On every PR that touches tool definitions, you get a comment like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;🟣 &lt;code&gt;mcp.json&lt;/code&gt; — &lt;strong&gt;95/100&lt;/strong&gt; (Dominant, top 3%)&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;F&lt;/th&gt;
&lt;th&gt;C&lt;/th&gt;
&lt;th&gt;P&lt;/th&gt;
&lt;th&gt;E&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;search_web&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;95&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;td&gt;33&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Top issues:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;search_web&lt;/code&gt;: No usage context — &lt;em&gt;Add 'Use this when...'&lt;/em&gt; (+5pt)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Scored by &lt;a href="https://toolrank.dev" rel="noopener noreferrer"&gt;ToolRank&lt;/a&gt; v1.0.0&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can also set a minimum score and fail PRs that don't meet it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;imhiroki/toolrank-action@v1&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;min-score&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;70&lt;/span&gt;
    &lt;span class="na"&gt;fail-on-low&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;REST API&lt;/h2&gt;

&lt;p&gt;If you want to integrate scoring into your own workflow:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://mcp.toolrank.dev/api/score &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"tools": [{"name": "search", "description": "Searches things"}]}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returns score, level, percentile, dimensions, and specific fix suggestions.&lt;/p&gt;
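&lt;p&gt;Based on those fields, a response might look roughly like this (the exact field names and values here are my illustration, not the documented schema):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "score": 62,
  "level": "Selectable",
  "percentile": 41,
  "dimensions": {"findability": 18, "clarity": 19, "precision": 16, "efficiency": 9},
  "suggestions": ["Add 'Use this when...' to the description (+5pt)"]
}
&lt;/code&gt;&lt;/pre&gt;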

&lt;h2&gt;Ecosystem data&lt;/h2&gt;

&lt;p&gt;We scan 4,000+ MCP servers daily from Smithery and the Official MCP Registry. Some findings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;374 servers&lt;/strong&gt; have scorable tool definitions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;73% of MCP servers&lt;/strong&gt; have zero tool definitions (invisible to agents)&lt;/li&gt;
&lt;li&gt;Average score: &lt;strong&gt;85.7/100&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Servers scoring 85+ get selected &lt;strong&gt;~85% of the time&lt;/strong&gt; in competitive scenarios&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check where your server ranks: &lt;a href="https://toolrank.dev/ranking" rel="noopener noreferrer"&gt;toolrank.dev/ranking&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What's next&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Layer 2: Real LLM selection testing (Q2 2026)&lt;/li&gt;
&lt;li&gt;Agent framework integrations (LangChain, CrewAI)&lt;/li&gt;
&lt;li&gt;Registry partnerships&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The scoring engine is &lt;a href="https://github.com/imhiroki/toolrank" rel="noopener noreferrer"&gt;fully open source&lt;/a&gt;. Star it if this is useful.&lt;/p&gt;

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://toolrank.dev/score" rel="noopener noreferrer"&gt;Score your tools&lt;/a&gt;&lt;/strong&gt; — paste JSON or enter your Smithery server name&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/marketplace/actions/toolrank-score" rel="noopener noreferrer"&gt;Add the GitHub Action&lt;/a&gt;&lt;/strong&gt; — 30 seconds to set up&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://toolrank.dev/ranking" rel="noopener noreferrer"&gt;Check the ranking&lt;/a&gt;&lt;/strong&gt; — see where you stand&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;em&gt;I'm building ToolRank as the quality standard for the MCP ecosystem. If you're building MCP tools, I'd love to hear what you think.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>githubactions</category>
      <category>devops</category>
    </item>
    <item>
      <title>We scanned 4,162 MCP servers. 73% are invisible to AI agents.</title>
      <dc:creator>Hiroki Honda</dc:creator>
      <pubDate>Sun, 29 Mar 2026 00:46:39 +0000</pubDate>
      <link>https://forem.com/imhiroki/we-scanned-4162-mcp-servers-73-are-invisible-to-ai-agents-2i8c</link>
      <guid>https://forem.com/imhiroki/we-scanned-4162-mcp-servers-73-are-invisible-to-ai-agents-2i8c</guid>
      <description>&lt;p&gt;There are 4,162 MCP servers registered on Smithery right now. The Python and TypeScript SDKs see 97 million monthly downloads. Every major AI provider has adopted MCP.&lt;/p&gt;

&lt;p&gt;But nobody had measured the &lt;strong&gt;quality&lt;/strong&gt; of these tools.&lt;/p&gt;

&lt;p&gt;We built &lt;a href="https://toolrank.dev" rel="noopener noreferrer"&gt;ToolRank&lt;/a&gt;, an open-source scoring engine that analyzes MCP tool definitions across four dimensions. Then we pointed it at the entire Smithery registry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The biggest finding wasn't about quality. It was about visibility.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;73% of MCP servers are invisible&lt;/h2&gt;

&lt;p&gt;Out of 4,162 registered servers, only &lt;strong&gt;1,122 expose tool definitions&lt;/strong&gt; that agents can read.&lt;/p&gt;

&lt;p&gt;The remaining 3,040 — &lt;strong&gt;73%&lt;/strong&gt; — have no tool definitions at all. They're registered, but when an AI agent searches for tools, these servers don't exist. They have no name, no description, no schema. They are invisible.&lt;/p&gt;

&lt;p&gt;This is the equivalent of having a website with no indexable content. Google can't rank what it can't read. Agents can't select what they can't see.&lt;/p&gt;
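&lt;p&gt;"Visible" here simply means the server answers a &lt;code&gt;tools/list&lt;/code&gt; request with at least one definition an agent can read. A minimal sketch of such a response (the tool itself is made up for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "tools": [
    {
      "name": "get_weather",
      "description": "Gets the current weather for a city. Use this when the user asks about weather conditions.",
      "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name, e.g. 'Tokyo'"}},
        "required": ["city"]
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;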

&lt;h2&gt;Among the visible: average score 84.7/100&lt;/h2&gt;

&lt;p&gt;For the 1,122 servers that do expose tool definitions, we scored each one across four dimensions:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension (weight)&lt;/th&gt;
&lt;th&gt;Question&lt;/th&gt;
&lt;th&gt;What it measures&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Findability&lt;/strong&gt; (25%)&lt;/td&gt;
&lt;td&gt;Can agents discover you?&lt;/td&gt;
&lt;td&gt;Registry presence, naming&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Clarity&lt;/strong&gt; (35%)&lt;/td&gt;
&lt;td&gt;Can agents understand you?&lt;/td&gt;
&lt;td&gt;Description quality, purpose, context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Precision&lt;/strong&gt; (25%)&lt;/td&gt;
&lt;td&gt;Is your schema precise?&lt;/td&gt;
&lt;td&gt;Types, enums, required fields&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt; (15%)&lt;/td&gt;
&lt;td&gt;Are you token-efficient?&lt;/td&gt;
&lt;td&gt;Definition size, tool count&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Average ToolRank Score: 84.7/100&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;%&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Dominant&lt;/td&gt;
&lt;td&gt;85-100&lt;/td&gt;
&lt;td&gt;677&lt;/td&gt;
&lt;td&gt;60%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Preferred&lt;/td&gt;
&lt;td&gt;70-84&lt;/td&gt;
&lt;td&gt;406&lt;/td&gt;
&lt;td&gt;36%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Selectable&lt;/td&gt;
&lt;td&gt;50-69&lt;/td&gt;
&lt;td&gt;39&lt;/td&gt;
&lt;td&gt;3.5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Visible&lt;/td&gt;
&lt;td&gt;25-49&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Absent&lt;/td&gt;
&lt;td&gt;0-24&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The average is higher than expected. But there's a survivorship bias: servers that bother to expose tool definitions tend to be better maintained overall. The real quality problem is the 73% that expose nothing.&lt;/p&gt;

&lt;h2&gt;Top-scoring servers&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rank&lt;/th&gt;
&lt;th&gt;Server&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;microsoft/learn_mcp&lt;/td&gt;
&lt;td&gt;96.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;docfork/docfork&lt;/td&gt;
&lt;td&gt;96.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;brave (Brave Search)&lt;/td&gt;
&lt;td&gt;94.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;LinkupPlatform/linkup-mcp-server&lt;/td&gt;
&lt;td&gt;93.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;smithery-ai/national-weather-service&lt;/td&gt;
&lt;td&gt;93.3&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;What do top servers do differently? They start descriptions with clear action verbs. They include usage context ("Use this when..."). They define required fields, enums, and defaults. They keep tool count under 15.&lt;/p&gt;

&lt;h2&gt;The most common defects&lt;/h2&gt;

&lt;p&gt;Among the 1,122 scored servers, the most frequent issues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Missing usage context&lt;/strong&gt; — Description says what the tool does, but not &lt;em&gt;when&lt;/em&gt; to use it. Agents need "Use this when..." to decide between competing tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No return value described&lt;/strong&gt; — Agents can't predict what they'll get back. This leads to incorrect downstream handling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Missing parameter descriptions&lt;/strong&gt; — Schema has types but no explanations. An agent sees &lt;code&gt;"query": {"type": "string"}&lt;/code&gt; but doesn't know what format the query should be in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No required fields defined&lt;/strong&gt; — Agents guess which parameters are mandatory, leading to failed executions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description too short&lt;/strong&gt; — Under 50 characters. Not enough information for reliable selection.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
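&lt;p&gt;To make the pattern concrete, here is a before/after for a hypothetical tool exhibiting defects 1, 3, and 4 (the tool is invented for illustration; the annotations are not valid JSON):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Before: no usage context, no parameter description, no required fields
{
  "name": "search",
  "description": "Searches things",
  "inputSchema": {"type": "object", "properties": {"query": {"type": "string"}}}
}

// After
{
  "name": "search",
  "description": "Searches the web and returns titles, URLs, and snippets. Use this when you need information beyond your training data.",
  "inputSchema": {
    "type": "object",
    "properties": {"query": {"type": "string", "description": "Search terms in plain language"}},
    "required": ["query"]
  }
}
&lt;/code&gt;&lt;/pre&gt;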

&lt;h2&gt;What does this mean?&lt;/h2&gt;

&lt;p&gt;Academic research backs up why this matters. A study of 10,831 MCP servers (arXiv 2602.18914) found that tools with quality-compliant descriptions achieve &lt;strong&gt;72% selection probability&lt;/strong&gt; vs 20% for non-compliant ones. That's a &lt;strong&gt;3.6x advantage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The fixes are trivial: adding "Use this when..." to a description, defining &lt;code&gt;required&lt;/code&gt; fields, starting the description with a verb. These are 5-minute changes with measurable impact.&lt;/p&gt;

&lt;h2&gt;We're calling this ATO&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ATO (Agent Tool Optimization)&lt;/strong&gt; is the practice of optimizing tools so AI agents autonomously discover, select, and execute them.&lt;/p&gt;

&lt;p&gt;SEO optimizes for search engines. LLMO optimizes for LLM citations. ATO optimizes for agent tool selection.&lt;/p&gt;

&lt;p&gt;The key difference: SEO and LLMO result in mentions and links. ATO results in your API being called and transactions occurring. LLMO is Stage 1 of ATO — necessary but not sufficient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://toolrank.dev/framework" rel="noopener noreferrer"&gt;Full ATO framework →&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Try it yourself&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Score your tools:&lt;/strong&gt; &lt;a href="https://toolrank.dev/score" rel="noopener noreferrer"&gt;toolrank.dev/score&lt;/a&gt; — paste your tool definition JSON. Includes "Try example" buttons so you can see how scoring works instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ecosystem ranking:&lt;/strong&gt; &lt;a href="https://toolrank.dev/ranking" rel="noopener noreferrer"&gt;toolrank.dev/ranking&lt;/a&gt; — live rankings updated weekly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scoring engine:&lt;/strong&gt; &lt;a href="https://github.com/imhiroki/toolrank" rel="noopener noreferrer"&gt;github.com/imhiroki/toolrank&lt;/a&gt; — fully open source. The scoring logic is transparent.&lt;/p&gt;

&lt;h2&gt;Methodology&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data source:&lt;/strong&gt; Smithery Registry API (registry.smithery.ai)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scan date:&lt;/strong&gt; March 28, 2026&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Servers in registry:&lt;/strong&gt; 4,162&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Servers with tool definitions:&lt;/strong&gt; 1,122 (27%)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scoring level:&lt;/strong&gt; Level A (rule-based, 14 checks, zero LLM cost)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request interval:&lt;/strong&gt; 2 seconds (respectful to Smithery infrastructure)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full scan time:&lt;/strong&gt; ~2 hours&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/imhiroki/toolrank/blob/main/packages/scoring/toolrank_score.py" rel="noopener noreferrer"&gt;Open source&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data updates weekly via automated scanning. Daily differential scans catch new servers.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://github.com/imhiroki" rel="noopener noreferrer"&gt;@imhiroki&lt;/a&gt;. Questions, feedback, or want to improve your score? Open an issue on &lt;a href="https://github.com/imhiroki/toolrank/issues" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
