<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Femi Raphael</title>
    <description>The latest articles on Forem by Femi Raphael (@0xfemyn).</description>
    <link>https://forem.com/0xfemyn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3869805%2Fb1089db6-5bf7-4fdf-bf09-f20fc6bed25e.PNG</url>
      <title>Forem: Femi Raphael</title>
      <link>https://forem.com/0xfemyn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/0xfemyn"/>
    <language>en</language>
    <item>
      <title>How to Plug AIsa into your Hermes Agent in 2 Minutes (Without Rebuilding Your Setup)</title>
      <dc:creator>Femi Raphael</dc:creator>
      <pubDate>Thu, 09 Apr 2026 13:51:03 +0000</pubDate>
      <link>https://forem.com/0xfemyn/how-to-plug-aisa-into-your-hermes-agent-in-2-minutes-without-rebuilding-your-setup-28m9</link>
      <guid>https://forem.com/0xfemyn/how-to-plug-aisa-into-your-hermes-agent-in-2-minutes-without-rebuilding-your-setup-28m9</guid>
      <description>&lt;p&gt;Most people make Hermes way harder to run than it needs to be.&lt;br&gt;
They get the runtime working, but then they start changing providers, changing model configs, changing endpoints, and before long the setup itself becomes the problem. &lt;/p&gt;

&lt;p&gt;The easier approach is to keep Hermes pointed at one OpenAI-compatible layer and switch models from there.&lt;/p&gt;

&lt;p&gt;That is exactly where &lt;a href="https://aisa.one/" rel="noopener noreferrer"&gt;AIsa&lt;/a&gt; fits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04h4d5wfql6gkzpzkl3f.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04h4d5wfql6gkzpzkl3f.PNG" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hermes already supports any OpenAI-compatible API as a custom provider. AIsa is OpenAI-compatible and uses a single base URL at &lt;code&gt;https://api.aisa.one/v1&lt;/code&gt;, so the integration path is just the normal Hermes custom endpoint flow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No special adapters&lt;/li&gt;
&lt;li&gt;No hacky workarounds&lt;/li&gt;
&lt;/ul&gt;
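&lt;p&gt;To make the "no adapters" point concrete, here is a minimal sketch (Python standard library only; the key and model name are placeholders) of the ordinary OpenAI-style chat-completions request that any compatible client, Hermes included, ends up sending to the AIsa base URL:&lt;/p&gt;

```python
import json

# Placeholders for illustration -- substitute your real key and a model
# name listed on the AIsa marketplace.
AISA_BASE_URL = "https://api.aisa.one/v1"
API_KEY = "YOUR_AISA_API_KEY"

def build_chat_request(model, user_message):
    """Assemble the standard OpenAI-style chat-completions call that an
    OpenAI-compatible client sends to the gateway."""
    url = f"{AISA_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_chat_request("YOUR_MODEL_NAME", "ping")
print(url)  # https://api.aisa.one/v1/chat/completions
```

&lt;p&gt;Nothing here is AIsa-specific except the base URL, which is exactly why Hermes can treat it as a plain custom endpoint.&lt;/p&gt;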
&lt;h2&gt;Here are the two ways to set it up&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Method 1: The CLI Setup (Quickest)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The fastest way to get routing is through the Hermes CLI.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Generate your API key on the AIsa &lt;a href="https://marketplace.aisa.one/" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;hermes model&lt;/code&gt; in your terminal&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Custom endpoint&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the AIsa base URL: &lt;code&gt;https://api.aisa.one/v1&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter your AIsa API key (the one you generated from the marketplace dashboard)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pick the model you want Hermes to call (e.g., &lt;code&gt;qwen-3.6-plus&lt;/code&gt; or &lt;code&gt;claude-opus-4-6&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the same flow Hermes’ own quickstart walks through, so you can switch providers or models at any time just by re-running &lt;code&gt;hermes model&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 2: The Config Setup&lt;/strong&gt;&lt;br&gt;
If you prefer keeping things in code rather than using the setup wizard, Hermes supports the custom provider path directly in your config file.&lt;/p&gt;

&lt;p&gt;Open up &lt;code&gt;~/.hermes/config.yaml&lt;/code&gt; and drop this in:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom&lt;/span&gt;
  &lt;span class="na"&gt;base_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://api.aisa.one/v1&lt;/span&gt;
  &lt;span class="na"&gt;api_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;YOUR_AISA_API_KEY&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;YOUR_MODEL_NAME&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Why do it this way?&lt;/h2&gt;

&lt;p&gt;The benefit here is simple. AIsa gives you one base URL and one API key for accessing every major model, while keeping the request format strictly OpenAI-compatible.&lt;/p&gt;

&lt;p&gt;You also get provider-agnostic usage tracking and unified billing, so you aren't rebuilding your Hermes setup or juggling five different API dashboards every time you want to test whether Llama 3 handles a task better than GPT-4o.&lt;/p&gt;
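&lt;p&gt;As a quick illustration (the model names here are placeholders, not guaranteed AIsa identifiers), swapping models through one OpenAI-compatible gateway changes exactly one field of the request:&lt;/p&gt;

```python
def payload_for(model, prompt):
    """Build the OpenAI-style request body for a given model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

a = payload_for("llama-3-70b", "Summarize this ticket.")
b = payload_for("gpt-4o", "Summarize this ticket.")

# Everything except the model field is identical, which is why an A/B
# comparison through one gateway needs no client-side changes.
diff = {k for k in a if a[k] != b[k]}
print(diff)  # {'model'}
```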

&lt;h2&gt;A Quick Reality Check on Fallbacks&lt;/h2&gt;

&lt;p&gt;If you are doing this for reliability, Hermes recently added support for ordered fallback provider chains through &lt;code&gt;fallback_providers&lt;/code&gt; in the config file.&lt;/p&gt;
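&lt;p&gt;For later reference, a sketch of what that might look like. The &lt;code&gt;fallback_providers&lt;/code&gt; key name comes from the Hermes release notes; the exact nesting is an assumption, so verify it against your Hermes version before relying on it:&lt;/p&gt;

```yaml
# Sketch only: nesting is assumed, not confirmed against Hermes docs.
model:
  provider: custom
  base_url: https://api.aisa.one/v1
  api_key: YOUR_AISA_API_KEY
  default: YOUR_MODEL_NAME
fallback_providers:
  - provider: custom
    base_url: https://api.aisa.one/v1
    api_key: YOUR_AISA_API_KEY
    default: YOUR_FALLBACK_MODEL_NAME
```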

&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; There are currently a few fresh bug reports describing fallback issues on the Hermes API server path.&lt;/p&gt;

&lt;p&gt;So the cleanest, most stable recommendation right now is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start with AIsa as your Hermes custom provider&lt;/li&gt;
&lt;li&gt;Get your primary model working stably first&lt;/li&gt;
&lt;li&gt;Add the fallback logic later once the base path is solid&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This gives you a much simpler setup, better model switching, and way less provider churn while you’re building.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgdfv3ymslzbs9oeyiga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgdfv3ymslzbs9oeyiga.png" alt=" " width="800" height="154"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;(P.S. AIsa gives new users free API credits, so you can test this exact routing setup on your agent right now without putting in a card).&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>hermes</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
