<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: AgentOne</title>
    <description>The latest articles on Forem by AgentOne (@agent-one).</description>
    <link>https://forem.com/agent-one</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F13163%2Fdf8848c4-080a-4e16-80f9-95fc26b58389.png</url>
      <title>Forem: AgentOne</title>
      <link>https://forem.com/agent-one</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/agent-one"/>
    <language>en</language>
    <item>
      <title>Introducing the AI Model Directory</title>
      <dc:creator>BestCodes</dc:creator>
      <pubDate>Mon, 04 May 2026 14:33:33 +0000</pubDate>
      <link>https://forem.com/agent-one/introducing-the-ai-model-directory-43jd</link>
      <guid>https://forem.com/agent-one/introducing-the-ai-model-directory-43jd</guid>
      <description>&lt;p&gt;Today we're open-sourcing the &lt;a href="https://github.com/The-Best-Codes/ai-model-directory" rel="noopener noreferrer"&gt;AI Model Directory&lt;/a&gt;, the most comprehensive, automatically updated list of AI models and their metadata available today. It's the data layer that powers model selection in &lt;a href="https://www.agent-one.dev" rel="noopener noreferrer"&gt;AgentOne&lt;/a&gt;, and now it's free for anyone to use, fork, or contribute to.&lt;/p&gt;

&lt;p&gt;If you'd rather just look at models, we also built a browser for the directory at &lt;a href="https://models.agent-one.dev" rel="noopener noreferrer"&gt;models.agent-one.dev&lt;/a&gt; where you can search, sort, and compare every model in the directory.&lt;/p&gt;

&lt;h2&gt;Why Does This Exist?&lt;/h2&gt;

&lt;p&gt;When building &lt;a href="https://www.agent-one.dev" rel="noopener noreferrer"&gt;AgentOne&lt;/a&gt;, I needed a comprehensive list of AI models and their metadata - costs, context windows, supported features, modalities - so AgentOne could give users easy access to &lt;em&gt;every&lt;/em&gt; model an AI provider had to offer.&lt;/p&gt;

&lt;p&gt;I was frustrated with the existing options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Models.dev&lt;/strong&gt; is opinionated rather than comprehensive, and new frontier models can take anywhere from a few days to weeks to appear across all providers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LiteLLM&lt;/strong&gt; is more comprehensive for some providers, but the data is fragmented and harder to work with&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portkey Models&lt;/strong&gt; doesn't list as many models as alternatives do&lt;/li&gt;
&lt;li&gt;Other catalogs are often built with a particular product or service in mind, so they wind up vendor-specific, incomplete, or out of date&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI Model Directory aims to be easy to use (like Models.dev), truly comprehensive across every provider it includes, and automatically updated with security in mind.&lt;/p&gt;

&lt;h2&gt;How Does It Work?&lt;/h2&gt;

&lt;p&gt;A GitHub Actions workflow runs every 24 hours and re-fetches model metadata from every supported provider. Each provider has its own small adapter that knows how to talk to that provider's API or read its docs, and normalizes the response into a single shared schema covering things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pricing&lt;/strong&gt;: input, output, reasoning, cache read/write, audio in/out&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limits&lt;/strong&gt;: context, input, and output token limits&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modalities&lt;/strong&gt;: text, image, audio, video, file (in and out)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Features&lt;/strong&gt;: attachments, reasoning, tool calls, structured output, temperature&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata&lt;/strong&gt;: knowledge cutoff, release date, last updated, open weights&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every model gets its own folder under &lt;code&gt;data/providers/&amp;lt;provider&amp;gt;/&amp;lt;model-id&amp;gt;/index.toml&lt;/code&gt;, so the directory is just a tree of TOML files. This makes it easy to read, easy to diff, and easy to consume from any language. If a provider's data is wrong or missing something, you can drop a &lt;code&gt;metadata.toml&lt;/code&gt; (with data overrides) next to the generated file and the next refresh will merge your overrides on top of the fetched data instead of clobbering them.&lt;/p&gt;
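&lt;p&gt;The override semantics are easy to picture: a recursive merge where your local values win. Here's an illustrative Python sketch of that behavior (not the repo's actual implementation):&lt;/p&gt;

```python
# Sketch of the override behavior described above: values from a local
# metadata.toml win over freshly fetched data, recursively. Illustrative
# semantics only, not the repo's actual code.
def merge_overrides(fetched: dict, overrides: dict) -> dict:
    merged = dict(fetched)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_overrides(merged[key], value)
        else:
            merged[key] = value
    return merged

fetched = {"cost": {"input": 1.0, "output": 4.0}, "open_weights": False}
overrides = {"cost": {"input": 1.25}}  # your locally measured correction
print(merge_overrides(fetched, overrides))
# {'cost': {'input': 1.25, 'output': 4.0}, 'open_weights': False}
```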

&lt;p&gt;To provide an experience similar to &lt;code&gt;models.dev/api.json&lt;/code&gt;, a &lt;code&gt;data/all.json&lt;/code&gt; file is automatically generated as well, so you can pull the entire directory in one fetch. A minified &lt;code&gt;data/all.min.json&lt;/code&gt; is also generated to cut bandwidth:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/The-Best-Codes/ai-model-directory/refs/heads/main/data/all.min.json" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/The-Best-Codes/ai-model-directory/refs/heads/main/data/all.min.json&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What's In the Directory?&lt;/h2&gt;

&lt;p&gt;At launch, the directory tracks models from &lt;strong&gt;35+ providers&lt;/strong&gt;, including OpenAI, Anthropic, Google, xAI, Mistral, DeepSeek, Cohere, Perplexity, OpenRouter, Vercel, GitHub Copilot, GitHub Models, Hugging Face, Groq, Cerebras, Fireworks, Together, DeepInfra, Baseten, Novita, Alibaba, Inception, Venice, Chutes, Friendli, and many more... and that list keeps growing. If your favorite provider isn't there, &lt;a href="https://github.com/The-Best-Codes/ai-model-directory/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt; or send a PR; adding a new provider is usually a single small adapter file.&lt;/p&gt;

&lt;h2&gt;Browse It at models.agent-one.dev&lt;/h2&gt;

&lt;p&gt;Reading TOML files is great for machines, but not always great for humans. So we built a frontend for the directory at &lt;a href="https://models.agent-one.dev" rel="noopener noreferrer"&gt;models.agent-one.dev&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It's a fast, sortable, searchable table with a column for everything in the schema. You can search across providers, model IDs, features, and modalities at once, sort by any column, and click straight through to a provider's website. It's the easiest way to answer questions like "which models support reasoning &lt;strong&gt;and&lt;/strong&gt; tool calls under $1 per million input tokens?"&lt;/p&gt;

&lt;p&gt;The table loads directly from &lt;code&gt;data/all.min.json&lt;/code&gt; in the directory repo, so it's always in sync with the latest run.&lt;/p&gt;

&lt;h2&gt;Using It in Your Own Project&lt;/h2&gt;

&lt;p&gt;Consuming the directory is easy. Hit the raw GitHub URL for the bundled file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/The-Best-Codes/ai-model-directory/main/data/all.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or fetch the minified version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/The-Best-Codes/ai-model-directory/main/data/all.min.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You get back a JSON object keyed by provider, with each provider's models nested inside. This is the easiest path if you just need to populate a model picker or a pricing table. Because everything is plain files, you can fork the repo, add your own provider adapters, drop in &lt;code&gt;metadata.toml&lt;/code&gt; for models you've measured yourself, and run the same GitHub Actions workflow on your fork. Your fork stays in sync with upstream while keeping your overrides intact.&lt;/p&gt;
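&lt;p&gt;As a sketch of what you can do with that object, here's a hypothetical Python filter answering "which models support reasoning and tool calls under $1 per million input tokens?" The field names (&lt;code&gt;features&lt;/code&gt;, &lt;code&gt;cost.input&lt;/code&gt;) are assumptions for illustration; check &lt;code&gt;data/all.json&lt;/code&gt; for the real shape:&lt;/p&gt;

```python
# Hypothetical filter over the provider-keyed JSON blob. Field names
# ("features", "cost.input") and feature labels are assumptions for
# illustration; inspect data/all.json for the actual schema.
import operator

def cheap_reasoning_models(directory: dict, max_input_cost: float = 1.0) -> list[str]:
    hits = []
    for provider, models in directory.items():
        for model_id, model in models.items():
            features = model.get("features", [])
            cost = model.get("cost", {}).get("input")
            # keep models whose known input cost is strictly below the threshold
            if ("reasoning" in features and "tool_call" in features
                    and cost is not None and operator.lt(cost, max_input_cost)):
                hits.append(f"{provider}/{model_id}")
    return hits

sample = {
    "acme": {
        "acme-mini": {"features": ["reasoning", "tool_call"], "cost": {"input": 0.4}},
        "acme-large": {"features": ["reasoning", "tool_call"], "cost": {"input": 3.0}},
        "acme-chat": {"features": ["tool_call"], "cost": {"input": 0.2}},
    }
}
print(cheap_reasoning_models(sample))  # ['acme/acme-mini']
```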

&lt;h2&gt;Security&lt;/h2&gt;

&lt;p&gt;Because the directory is updated automatically based on data fetched from third-party providers, the data here is only as trustworthy as the providers it comes from. If you're using this to make billing or routing decisions, treat it as a strong default and not as gospel. We have several measures in place to mitigate the obvious vulnerabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provider endpoints are &lt;strong&gt;hardcoded in source&lt;/strong&gt;, so providers cannot redirect the updater to arbitrary user-controlled URLs&lt;/li&gt;
&lt;li&gt;All fetched data is &lt;strong&gt;validated against a strict Zod schema&lt;/strong&gt; before it's written to disk, which helps prevent malformed or unexpected fields from slipping through&lt;/li&gt;
&lt;li&gt;Model IDs are &lt;strong&gt;normalized into safe directory names&lt;/strong&gt; before writing, and entries whose normalized name would be empty are rejected&lt;/li&gt;
&lt;li&gt;If multiple model IDs normalize to the same directory name, we resolve that &lt;strong&gt;deterministically&lt;/strong&gt; instead of writing multiple conflicting directories&lt;/li&gt;
&lt;li&gt;Terminal output is &lt;strong&gt;sanitized&lt;/strong&gt; before logging, which reduces the risk of ANSI escape sequences or control characters spoofing the updater output&lt;/li&gt;
&lt;li&gt;Every network fetch has a &lt;strong&gt;60-second timeout&lt;/strong&gt;, so a slow or hostile provider can't hang the update job forever&lt;/li&gt;
&lt;li&gt;IDs and names are &lt;strong&gt;length-limited&lt;/strong&gt; and reject raw control characters, which helps defend against weird escapes, invisible junk in logs, and other malformed provider output&lt;/li&gt;
&lt;li&gt;Generated model directories that no longer exist upstream are &lt;strong&gt;removed automatically&lt;/strong&gt; on refresh&lt;/li&gt;
&lt;li&gt;Overrides stay local: &lt;code&gt;metadata.toml&lt;/code&gt; only applies to that model directory and is merged on top of fetched data&lt;/li&gt;
&lt;li&gt;The updater &lt;strong&gt;does not execute&lt;/strong&gt; provider-supplied code, shell commands, or HTML; it only fetches remote content, parses it, validates it, and writes normalized TOML files&lt;/li&gt;
&lt;/ul&gt;
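&lt;p&gt;To make the normalization step concrete, here's an illustrative sketch of turning an arbitrary provider-supplied model ID into a safe directory name; the repo's actual rules may differ:&lt;/p&gt;

```python
# Illustrative sketch of ID-to-directory-name normalization as described
# above: drop control characters, collapse unsafe characters, cap the
# length, and reject entries that normalize to nothing.
import re

def normalize_model_id(model_id: str, max_len: int = 100) -> str:
    cleaned = "".join(ch for ch in model_id if ch.isprintable())  # drop control chars
    cleaned = re.sub(r"[^A-Za-z0-9._-]+", "-", cleaned).strip("-")[:max_len]
    if not cleaned:
        raise ValueError("model ID normalizes to an empty name; rejecting entry")
    return cleaned.lower()

print(normalize_model_id("accounts/fireworks/models/llama-v3"))
# accounts-fireworks-models-llama-v3
```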

&lt;p&gt;That said, this is still provider-supplied metadata. A provider can lie about pricing, capabilities, limits, or release dates, and some providers expose better metadata than others. The goal here is to make the pipeline safe and robust, not to pretend third-party metadata is perfectly trustworthy.&lt;/p&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;This is a beta release, so expect a few rough edges. Some of the things we're working on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More providers (especially regional and self-hosted offerings)&lt;/li&gt;
&lt;li&gt;A proper docs site&lt;/li&gt;
&lt;li&gt;Programmatic SDKs for JS/TS, Python, and Go&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to help shape any of this, &lt;a href="https://www.agent-one.dev/discord" rel="noopener noreferrer"&gt;join us on Discord&lt;/a&gt;, &lt;a href="https://github.com/The-Best-Codes/ai-model-directory/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt;, or send a PR.&lt;/p&gt;

&lt;h2&gt;Try It Out&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Browse the data: &lt;a href="https://models.agent-one.dev" rel="noopener noreferrer"&gt;models.agent-one.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read or fork the source: &lt;a href="https://github.com/The-Best-Codes/ai-model-directory" rel="noopener noreferrer"&gt;github.com/The-Best-Codes/ai-model-directory&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Use it in your app: &lt;a href="https://www.agent-one.dev" rel="noopener noreferrer"&gt;AgentOne&lt;/a&gt; already does&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy building!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How to Use GPT 5.5 for Agentic Coding</title>
      <dc:creator>BestCodes</dc:creator>
      <pubDate>Sat, 25 Apr 2026 00:41:31 +0000</pubDate>
      <link>https://forem.com/agent-one/how-to-use-gpt-55-for-agentic-coding-2o8e</link>
      <guid>https://forem.com/agent-one/how-to-use-gpt-55-for-agentic-coding-2o8e</guid>
      <description>&lt;p&gt;OpenAI's GPT 5.5 is one of the most capable models available for agentic coding - writing code, using tools, running commands, and iterating on complex tasks autonomously. In this guide, you'll learn how to set up GPT 5.5 in &lt;a href="https://www.agent-one.dev" rel="noopener noreferrer"&gt;AgentOne&lt;/a&gt; and start using it for agentic coding workflows.&lt;/p&gt;

&lt;h2&gt;What Is Agentic Coding?&lt;/h2&gt;

&lt;p&gt;Agentic coding is when an AI model doesn't just suggest code; it actively writes, runs, debugs, and iterates on code using tools. Instead of copy-pasting snippets from a chatbot, you give the agent a task and it handles the implementation end to end.&lt;/p&gt;

&lt;p&gt;GPT 5.5 excels at this because of its strong tool-use capabilities, large context window, and ability to follow multi-step instructions reliably.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.agent-one.dev" rel="noopener noreferrer"&gt;AgentOne&lt;/a&gt; desktop app installed&lt;/li&gt;
&lt;li&gt;An OpenAI API key with access to GPT 5.5&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don't have an OpenAI API key yet, head to &lt;a href="https://platform.openai.com" rel="noopener noreferrer"&gt;platform.openai.com&lt;/a&gt; to create one.&lt;/p&gt;

&lt;h2&gt;Step 1: Open Provider Settings&lt;/h2&gt;

&lt;p&gt;Launch AgentOne and open &lt;strong&gt;Settings&lt;/strong&gt;. Navigate to the &lt;strong&gt;Provider&lt;/strong&gt; section. This is where you manage all your AI providers and API keys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipqnkpm2ieb4msv3rng4.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipqnkpm2ieb4msv3rng4.webp" alt="AgentOne provider settings" width="800" height="663"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Step 2: Add a Custom OpenAI-Compatible Provider&lt;/h2&gt;

&lt;p&gt;GPT 5.5 isn't in AgentOne's built-in model list yet, so you'll add it as a custom provider. Click the &lt;strong&gt;Add Provider&lt;/strong&gt; button in the top right, then select &lt;strong&gt;OpenAI Compatible&lt;/strong&gt; from the dropdown.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39da95chr5tivpg2qs8z.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39da95chr5tivpg2qs8z.webp" alt="Add provider dropdown" width="579" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A dialog will appear with fields to configure the new provider.&lt;/p&gt;

&lt;h2&gt;Step 3: Configure the Provider&lt;/h2&gt;

&lt;p&gt;Fill in the following fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: &lt;code&gt;GPT 5.5&lt;/code&gt; (or whatever you'd like to call it)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Base URL&lt;/strong&gt;: &lt;code&gt;https://api.openai.com/v1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Key&lt;/strong&gt;: Your OpenAI API key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53t3mnpepz7t3xe59gii.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53t3mnpepz7t3xe59gii.webp" alt="Provider configuration dialog" width="376" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can leave the &lt;strong&gt;Custom Headers&lt;/strong&gt; section empty, as it's only needed for providers that require extra authentication headers.&lt;/p&gt;

&lt;h2&gt;Step 4: Add the Model&lt;/h2&gt;

&lt;p&gt;In the same dialog, scroll down to the &lt;strong&gt;Models&lt;/strong&gt; section. You have two options:&lt;/p&gt;

&lt;h3&gt;Option A: Auto-Fetch Models&lt;/h3&gt;

&lt;p&gt;Click the &lt;strong&gt;Auto&lt;/strong&gt; button. AgentOne will call OpenAI's &lt;code&gt;/models&lt;/code&gt; endpoint and pull in all available models. Find &lt;code&gt;gpt-5.5&lt;/code&gt; in the list and remove any models you don't need.&lt;/p&gt;
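&lt;p&gt;If auto-fetch fails, you can sanity-check your Base URL and API key against the same endpoint yourself. This minimal Python sketch only builds the request (sending it requires a valid key) and assumes a standard OpenAI-compatible &lt;code&gt;/models&lt;/code&gt; route:&lt;/p&gt;

```python
# Build (but don't send) the GET /models request that an OpenAI-compatible
# provider serves, using only the stdlib. The key shown is a placeholder.
import urllib.request

def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        base_url.rstrip("/") + "/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request("https://api.openai.com/v1", "sk-your-key-here")
print(req.full_url)  # https://api.openai.com/v1/models
# urllib.request.urlopen(req) would return the provider's JSON model list
```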

&lt;h3&gt;Option B: Add Manually&lt;/h3&gt;

&lt;p&gt;Click the &lt;strong&gt;Add&lt;/strong&gt; button to open the model form. Fill in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model ID&lt;/strong&gt;: &lt;code&gt;gpt-5.5&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Display Name&lt;/strong&gt;: &lt;code&gt;GPT 5.5&lt;/code&gt; (optional, for a cleaner label in the UI)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make sure both &lt;strong&gt;Supports Tools&lt;/strong&gt; and &lt;strong&gt;Supports Images&lt;/strong&gt; are toggled on; GPT 5.5 supports both.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1410u0a2ahbjdl2da84z.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1410u0a2ahbjdl2da84z.webp" alt="Add model form" width="371" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Add Model&lt;/strong&gt;, then click &lt;strong&gt;Add Provider&lt;/strong&gt; to save everything.&lt;/p&gt;

&lt;h2&gt;Step 5: Select GPT 5.5 in the Model Selector&lt;/h2&gt;

&lt;p&gt;Go back to the main chat view. Click the model selector (the model name shown near the input area) and you should see &lt;strong&gt;GPT 5.5&lt;/strong&gt; listed under your custom provider. Select it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrdp28ak65n0m8f76wqw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrdp28ak65n0m8f76wqw.webp" alt="Model selector showing GPT 5.5" width="587" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it. You're now using GPT 5.5 for all your conversations in AgentOne.&lt;/p&gt;

&lt;h2&gt;Step 6: Start Agentic Coding&lt;/h2&gt;

&lt;p&gt;With GPT 5.5 selected, you can give AgentOne tasks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Create a REST API with Express and PostgreSQL"&lt;/li&gt;
&lt;li&gt;"Refactor this component to use React hooks"&lt;/li&gt;
&lt;li&gt;"Write tests for the auth module"&lt;/li&gt;
&lt;li&gt;"Find and fix the bug in the payment flow"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GPT 5.5 will use AgentOne's tool system to read your files, write code, run terminal commands, and iterate until the task is done.&lt;/p&gt;

&lt;h3&gt;Tips for Getting the Best Results&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Be specific.&lt;/strong&gt; Instead of "make it better," say "add input validation to the sign-up form and return proper error messages."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Give context.&lt;/strong&gt; Mention the framework, language, and any constraints. "Use TypeScript, Prisma, and the existing database schema in &lt;code&gt;prisma/schema.prisma&lt;/code&gt;."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Let it iterate.&lt;/strong&gt; Agentic coding works best when you let the model run commands, see errors, and fix them on its own.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Adding Other GPT 5.5 Variants&lt;/h2&gt;

&lt;p&gt;OpenAI offers several GPT 5.5 model variants. You can add multiple model IDs to the same provider:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model ID&lt;/th&gt;
&lt;th&gt;Best for&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;gpt-5.5&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;General agentic coding tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;gpt-5.5-pro&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Harder tasks; more capable, but more expensive&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To add more models, go to &lt;strong&gt;Settings&lt;/strong&gt; &amp;gt; &lt;strong&gt;Providers&lt;/strong&gt;, expand your GPT 5.5 provider, and use the &lt;strong&gt;Add&lt;/strong&gt; button in the Models section.&lt;/p&gt;

&lt;h2&gt;Wrapping Up&lt;/h2&gt;

&lt;p&gt;Setting up GPT 5.5 in AgentOne takes less than a minute. Once configured, you get a powerful agentic coding environment where GPT 5.5 can read, write, and run your code autonomously.&lt;/p&gt;

&lt;p&gt;If you run into issues, make sure your OpenAI API key has access to the &lt;code&gt;gpt-5.5&lt;/code&gt; model and that your base URL is set to &lt;code&gt;https://api.openai.com/v1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Give it a try! &lt;a href="https://www.agent-one.dev" rel="noopener noreferrer"&gt;Download AgentOne&lt;/a&gt; and start building with GPT 5.5 today. Need help? Join the &lt;a href="https://www.agent-one.dev/discord" rel="noopener noreferrer"&gt;AgentOne Discord server&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
